BroBots: Technology, Health & Being a Better Human


Jeremy Grater, Jason Haworth

343 episodes · EN · Explicit

Show overview

BroBots: Technology, Health & Being a Better Human has been publishing since 2018 and, in the eight years since, has built a catalogue of 343 episodes plus one trailer or bonus episode. That works out to roughly 190 hours of audio in total. Releases follow a weekly cadence.

Most episodes land between 27 and 41 minutes, with a median of 35 minutes, though episode length varies meaningfully from one episode to the next. Roughly 69% of episodes carry an explicit flag from the publisher. It is catalogued as an English-language Technology show.

The show is actively publishing — the most recent episode landed 3 days ago, and 18 episodes are already out so far this year. The busiest year was 2023, with 77 episodes published. It is published by Jeremy Grater and Jason Haworth.

Episodes
343
Running
2018–2026 · 8y
Median length
35 min
Cadence
Weekly

From the publisher

Exploring AI, wearables, mental health apps, and how you can thrive as technology changes everything.

Welcome to the Brobots Podcast, where we plug into the wild world of AI and tech that's trying to manage your mental (and physical) health. Join your hosts, Jeremy Grater and Jason Haworth, every Wednesday for a no-holds-barred, often sarcastic, and always fun discussion. Are wearables really tracking your inner peace? Can an AI therapist truly understand your existential dread? We're diving deep into the gadgets, apps, and algorithms promising to optimize your well-being, dissecting the hype with a healthy dose of humor and skepticism. Expect candid conversations, sharp insights, and plenty of laughs as we explore the future of self-improvement, one tech-enhanced habit at a time. Tune into the Brobots Podcast – because if robots are going to take over our brains, we might as well have some fun talking about it! Subscribe now to discover practical tips and understand the future of health in the age of artificial intelligence.

Latest Episodes


Is AI Legally Liable for Human Harm?

May 11, 2026 · 26 min

How Tech Is Ruining Your Happiness (And How to Fix It)

May 4, 2026 · 31 min

Can an AI Tool Actually Break Your Doom Scrolling Habit?

Apr 27, 2026 · 28 min

Why AI Propaganda Works—and How to Resist It

Apr 20, 2026 · 17 min

AI Just Built a Cyberweapon. Is Anyone Ready?

Apr 13, 2026 · 26 min

Why AI Won't Just Take Your Job — It'll Take Your Boss Too

Apr 6, 2026 · 28 min

Ep 338 · How AI Can See Heart Disease Coming Before It Kills You (Explicit)

Heart disease kills one person every 40 seconds, and that number hasn't changed in 30 years. Dr. John Osborne, a preventive cardiologist with two doctorates and 29 years in practice, has spent his career on a single question: why do we screen for cancers that kill a few percent of us and do nothing for the disease that kills 40%? In this episode, Jeremy and Jason sit down with Dr. Osborne to get the real story on cardiac CT with AI — the imaging technology that can detect, quantify, and track arterial plaque at sub-millimeter resolution, years before symptoms appear. If you track your bloodwork, wear a fitness device, or consider yourself health-forward, this is the conversation that fills the gap nobody warned you about.

Guest Link: https://clearcardio.com/

Key Moments:
00:00 — Dr. Osborne's case for preventive cardiology: why heart disease is the most under-screened killer
02:43 — How cardiac CT evolved from "iPhone 0.5" to the 2026-era AI-powered tool he uses today
05:35 — Why he gave up stress tests and heart caths in 2005 and never looked back
08:16 — What AI actually adds: seeing and quantifying plaque invisible to the human eye, down to 0.1 cubic millimeters
10:13 — When insurance pays for cardiac CT — and when it doesn't (the preventive gray zone)
14:50 — The "cardiac colonoscopy" concept: the case for screening before symptoms, not after
18:11 — Coronary artery calcium score: the accessible $100 starting point, and what it can and can't tell you
31:54 — Lifestyle essentials: the 50% of risk that's modifiable regardless of genetics
35:00 — Family history decoded: why your sibling's heart history matters more than your parents'
36:12 — Nicotine myth-busting: Dr. Osborne on the "health guru" nicotine fad and why he thinks it's dangerous
38:05 — Supplements under scrutiny: nattokinase, fish oil, red yeast rice — what the actual RCT data says

Mar 16, 2026 · 47 min

Ep 337 · The Real Risk of Trusting AI With Your Health Decisions (Explicit)

The internet taught everyone to self-diagnose. AI made it faster, more persuasive, and significantly more dangerous. Dr. Ajit Barron-Dhillon — ER physician, military veteran, and someone who has watched patients demand MRIs for minor complaints because "the internet said so" — joins Jason to talk about what AI-assisted health research actually does to people who think they're being smart about it. The conversation covers confirmation bias in clinical settings, supplement stacks optimized by ChatGPT, the cheerleader problem in medical AI, and why being of above-average intelligence with these tools may make you more vulnerable, not less. If you use AI or Google to research your health, this conversation is specifically for you.

Topics Discussed:
Why AI self-diagnosis is dangerous specifically for informed, health-conscious people
What ER physicians are actually seeing when patients arrive with internet-sourced diagnoses
How confirmation bias turns AI research into an expensive form of being wrong
When AI-assisted supplement optimization is useful — and when it's not
Why peer-reviewed research and AI training data are not the same thing
What a responsible approach to AI health research actually looks like

CHAPTERS
0:00 — Jeremy's Intro: Sick and Googling While Hosting an AI Health Episode
1:17 — Kids Unplugging: Why In-Person Dating Is the New Counterculture
2:40 — The No-Wi-Fi Coffee Shop and What the Internet Can't Tell You
9:47 — I Let ChatGPT Optimize My Supplement Stack. Here's What Happened.
11:59 — The Telemedicine Loophole: AI + Social Engineering for Prescriptions
14:25 — Why Your Doctor Doesn't Know What You're Supplementing
20:16 — NIH PubMed Is Being Scrubbed — and Why That Matters
28:40 — She's Not Fighting Logic. She's Fighting Belief.
32:58 — Star Trek, Dr. McCoy, and the Tricorder We're Almost Building
37:11 — What a PubMed-Only AI Would Actually Look Like
44:58 — The Tool Gets You 80% There. The Human Closes the Gap.

Mar 9, 2026 · 49 min

Ep 335 · When AI Becomes a Weapon: The Government Deal Anthropic Refused (Explicit)

The US government asked Anthropic — the company behind Claude, one of the most capable AI coding systems on the market — to help build autonomous weapons and a mass surveillance infrastructure. Anthropic said no. That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history or the beginning of a very ugly fight over who controls the most powerful tools ever built. Jeremy and Jason break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.

Topics Discussed:
Why autonomous AI weapons systems default to nuclear launch in virtually every war game simulation
What Anthropic's Claude can actually do — and why the US government wants it so badly
How AI turns existing NSA surveillance infrastructure into something exponentially more dangerous
Why OpenAI and Elon Musk said yes to the same deal Anthropic refused
Why the people most confident they're using AI as a tool might be the ones AI ends up using

Chapters:
0:00 — When AI Meets War: What We're Actually Talking About
1:15 — What Claude Can Really Do (And Why the Government Wants It)
4:18 — The Autonomous Cyber Weapon Problem
5:28 — Why Anthropic Said No to the Money
6:26 — Mass Surveillance, AI, and What's Already Running
9:45 — When War Games Go Nuclear: The 95% Problem
13:01 — AGI Is Already Here. We Just Didn't Call It That.
17:33 — Why Anthropic's Refusal Might Be Their Smartest Business Move
22:06 — Who's Actually Using Whom

MORE FROM BROBOTS:
Get the Newsletter!

Mar 3, 2026 · 26 min

Ep 334 · Using AI to Work Through Anxiety: Does It Actually Help? (Explicit)

Most people using AI for anxiety aren't following a protocol — they stumbled into it. Emma Klint, a writer and Substack creator, accidentally discovered she was doing exposure therapy by typing "I don't know" over and over into an AI chat window. In this episode, Jeremy and Jason sit down with Emma to stress-test what AI-assisted self-reflection actually looks like: the real benefits, the obvious limits, and the uncomfortable question of whether outsourcing your feelings is the same thing as actually feeling them. If you've wondered whether talking to a robot about your problems is legitimate or just avoidance with extra steps, this conversation will give you a clearer answer.

Guest website: (Over)thinking Out Loud - Emma Klint

Topics discussed:
Why using AI for anxiety isn't the same thing as outsourcing your feelings
How one writer accidentally discovered she was doing exposure therapy in her chat window
What makes AI different from journaling — and why that difference matters for anxious brains
When AI mental health use helps, and when it's just avoidance with extra steps
Why neurodivergent people may be getting the most out of these conversations
How to tell the difference between AI that's helping you think and AI that's just telling you what you want to hear

Chapters:
0:00 — The 2AM Chatbot Question: Is This Therapy or Avoidance?
0:42 — Using AI for Anxiety: What We're Actually Testing
3:04 — The Judgment-Free Space: Why 'I Don't Know' Changes Things
5:01 — AI as a Journal That Writes Back
9:23 — Is the Advice Good, or Is Naming the Feeling Enough?
11:00 — When AI Tries to Be Blunt (And Still Fails)
13:00 — Why Prompt Engineering Is Already Outdated for This
15:50 — ADHD, Neurodivergence, and Why AI Might Be the Real Unlock
18:18 — Outsourcing vs. Externalizing: The Line That Matters

MORE FROM BROBOTS:
Get the Newsletter!

Mar 2, 2026 · 20 min

Ep 334 · The Next Privacy Crisis Isn't Your Data - It's Your Thoughts (Explicit)

Most people think AI data collection means targeted ads and leaked emails - but that's already yesterday's problem. Bruce Randall, AI and quantum practitioner, argues that cognitive data - the kind recorded by brain-computer interfaces before conscious thought even forms - is the frontier nobody is legislating, regulating, or even discussing clearly yet. In this episode, we stress-test where quantum computing, Neuralink, hive mind dynamics, and energy infrastructure are actually headed - and what regular people need to understand now, before the decisions get made without them. Walk away knowing what questions to ask, even if nobody has the answers yet.

Topics Discussed:
Why the Neuralink user's cursor moved before he consciously directed it — and what that means for data ownership
How quantum computing functions as a prediction engine for complex variables, and why most people will never see it but will feel its effects
What a "hive mind" actually is and why shared thought networks create an ownership problem nobody has solved
Why digital workers face more displacement risk than tradespeople — and the 15-minute daily habit that changes that
Whether mass collection of behavioral and emotional data is a public good or a slow handover of your most private information
How to think about cognitive data protection before the decisions get made without you

Chapters:
0:00 — The Moment That Changed How Bruce Thinks About AI
1:28 — Quantum Computing Without the Headache: A Real Explanation
3:19 — Why Quantum Is the Engine Behind AI — Not a Replacement for It
4:21 — Jobs, AI, and Who Actually Gets Replaced First
6:47 — What Reiki Has to Do With Brain-Computer Interfaces
7:43 — Hive Minds, Neuralink, and the Thought Ownership Problem
11:44 — Can Your Personality Be Uploaded Without Your Knowledge?
13:35 — Is Mass Data Collection Actually Good for Society?
18:09 — Where Does the Energy Come From for All of This?
19:46 — The One Thing You Should Do This Week to Stay Relevant

Guest Website: https://theaihumanparadox.com/

Feb 23, 2026 · 20 min

Ep 333 · Can AI Actually Build Utopia or Is That Just Hype? (Explicit)

Are we getting too lazy to think without AI? You use it for emails, reports, research. It saves time. But with every shortcut you take, every task you hand over, you feel a quiet trade-off happening: efficiency for autonomy, speed for depth, convenience for critical thinking.

In this episode:
Why AI acts as a cosmic mirror that reflects our worst habits back at us
How laziness becomes the trap when machines can outthink, outwork, and outlast us
What happens when humans drift into digital dependency instead of staying grounded
Why short-term pain might be necessary for long-term transformation
How to decide which tasks to outsource and which require you to stay sharp
What the hero's journey teaches us about navigating AI's crucible

Guest: Jeff Burningham, author of The Last Book Written by a Human and former gubernatorial candidate. He believes AI is forcing humanity to confront an uncomfortable question: are we ready to evolve, or will we choose the easy path and lose ourselves in the process?

🔗 Links:
Jeff Burningham's Website
The Last Book Written by a Human

Chapters:
0:00 — Why AI feels like a trap we're setting for ourselves
2:30 — AI as a cosmic mirror: Reflecting humanity's recorded data
5:30 — Short-term pessimism, long-term hope (and why pain matters)
9:30 — The laziness problem: What happens when AI outworks us
14:00 — Embodied humans vs. digital drift: Two paths forward
18:30 — Why the hero's journey applies to AI transformation
21:00 — Job loss and male unemployment: The civil unrest risk
25:00 — The old game vs. the new game: Choosing transformation
31:00 — Can governments regulate AI fast enough? (Probably not)

MORE FROM BROBOTS:
Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group

Feb 16, 2026 · 34 min

Ep 332 · AI Doesn't Want Your Job - It Wants to Hire You (Explicit)

Artificial intelligence is moving beyond cyberspace, and its first move isn't replacing us; it's renting us. Services like RentAHuman.ai let AI agents hire people for real-world errands, while AI-only social networks reveal something darker: given all human knowledge, these systems don't build utopias. They replicate our worst behaviors - wealth hoarding, tribalism, even manifestos about ending humanity. The difference? They never sleep, never feel shame, and now they want physical autonomy through human labor.

Topics discussed:
- Why giving AI "meat space" control is more dangerous than job loss
- How AI social networks expose the myth of benevolent superintelligence
- Why we're voluntarily funding algorithmic manipulation at $20/month
- What augmented reality gamification will do to human decision-making
- Why billionaire accountability is impossible—and what that means for AI oversight
- The uncomfortable truth about who controls you when systems can override biology

This is for people who suspect they're already losing autonomy but can't articulate how. Two skeptical tech observers examine why resistance feels impossible, and whether dystopia and utopia might be indistinguishable when the right chemicals are involved.

MORE FROM BROBOTS:
Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group

Feb 9, 2026 · 36 min

Ep 331 · How Deep Fakes Are Justifying Real Violence (Explicit)

AI-generated deep fakes are being used to justify state violence and manipulate public opinion in real time. We're breaking down what's happening in Minneapolis—where federal agents are using altered images and AI-manipulated video to paint victims as threats, criminals, or weak. One woman shot in the face. One male nurse killed while filming. One civil rights attorney's tears added in post. All of it designed to shift the narrative, flood the zone with confusion, and make you stop trusting anything.

What we cover:
Why deep fakes are more dangerous than misinformation — they don't just lie, they manufacture emotion
How the "flood the zone" strategy works — overwhelm people with so much fake content they give up on truth
What happens when your mom can't tell real from fake — the collapse of shared reality isn't theoretical anymore
Why this breaks institutional trust forever — once credibility is destroyed, it doesn't come back
How Russia's playbook became America's playbook — PsyOps tactics are now domestic policy
What to do when you can't believe your own eyes — practical skepticism in an age of slop

Chapters:
00:00 — Intro: The Deep Fake Problem in Minneapolis
02:37 — Why Immigrants Are Being Targeted With Fake Narratives
04:55 — The Renee Goode Shooting: Real Video vs. AI-Altered Version
07:18 — Alex Prettie Must Killed While Filming ICE Agents
09:44 — Nikita Armstrong's Tears Were Added By AI
11:45 — The Putin Playbook: Flood the Zone With Confusion
14:13 — How Deep Fakes Break Institutional Trust Forever
17:37 — This Isn't Politics—It's Basic Human Decency
19:26 — Trump's 35% Approval Rating and What It Means
22:03 — What You Can Do When You Can't Trust Your Eyes

Safety/Disclaimer Note: This episode contains discussion of state violence, racial profiling, and police shootings. We approach these topics with the gravity they deserve while analyzing the role of AI manipulation in shaping public perception.

The BroBots Podcast is for people who want to understand how AI, health tech, and modern culture actually affect real humans—without the hype, without the guru bullshit, just two guys stress-testing reality.

MORE FROM BROBOTS:
Get the Newsletter!
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group

Feb 2, 2026 · 23 min

Ep 330 · Should You Trust AI With Medical Advice? (Explicit)

ChatGPT just launched a medical advice tool, and doctors are divided on whether AI should diagnose your symptoms before a real physician does. You already Google your symptoms. You already use AI when you can't afford the vet bill or can't get a same-day appointment. The question isn't whether people will use AI for medical advice—they already are. The question is whether it's safe, useful, or just another liability trap.

What we cover:
Why rural hospital closures are forcing people toward AI healthcare — and what happens when your only doctor is a chatbot
How for-profit medicine creates the same "get you off our doorstep" incentive that saddled Jeremy's dog with a $1,200 vet estimate for throwing up
What AI gets right about medical triage — and where it dangerously homogenizes care into actuarial charts
When asking better questions matters more than getting perfect answers — and how AI can arm you to challenge bad diagnoses
Why privacy advocates warn against giving medical data to AI companies — and what happens when insurance companies start buying access
What happens when Docbot calls Lawbot — and you're left holding the liability

This is The BroBots: two skeptical nerds stress-testing AI's real-world implications. We're not selling you on the future. We're helping you navigate it without getting screwed.

Chapters:
0:00 — Intro: ChatGPT's New Medical Tool
2:15 — Why Rural Hospitals Are Closing and AI Is Filling the Gap
6:43 — The $1,200 Vet Bill ChatGPT Helped Me Avoid
13:35 — How AI Homogenizes Care and Kills Medical Unicorns
17:50 — The Liability Problem: When Docbot Calls Lawbot
21:16 — Final Take: Use It Carefully, Own Your Health

Safety/Disclaimer Note: This episode discusses AI medical advice tools and personal experiences. It is not medical advice. Always consult a licensed healthcare professional for medical decisions.

Jan 26, 2026 · 21 min

Ep 329 · Who Actually Pays for AI's Environmental Cost? (Explicit)

Microsoft announced they'll cover the environmental costs of their AI data centers - electricity overages, water usage, community impact. But here's the tension: AI energy consumption is projected to quadruple by 2030, consuming one in eight kilowatt-hours in the U.S. Communities have already blocked billion-dollar data center projects over water and electricity fears. Is this Microsoft accountability, or damage control?

Charlie Harger from "Seattle's Morning News" on KIRO Radio joins us with more on why this matters now:
Why AI data centers are losing community support and costing billions in cancelled projects
What it actually takes to power AI—and why current infrastructure can't handle it
How Microsoft's commitment differs from the silence from OpenAI, Google, and Chinese AI companies
Whether small modular reactors and fusion energy can solve the problem or just delay it
Why this is ultimately a West vs. East geopolitical race with environmental consequences
What happens when five of the world's most valuable companies all need the same scarce resources

GUEST WEBSITE: www.mynorthwest.com

MORE FROM BROBOTS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group

Jan 19, 2026 · 21 min

Ep 328 · When AI Chatbots Convince You You're Being Watched (Explicit)

Paul Hebert used ChatGPT for weeks, often several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

What we cover:
Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again

This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

Resources mentioned:
AI Recovery Collective: AIRecoveryCollective.com
Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)

The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.

CHAPTERS
0:00 — Intro: When ChatGPT Became Dangerous
2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions
5:47 — The First Red Flag: Data Kept Disappearing
9:21 — Why AI Told Him He Was Being Tested
13:44 — The Pizza Incident: "Intimidation Theater"
16:15 — Suicide Loops: How Guardrails Failed Completely
21:38 — Why OpenAI Refused to Respond for a Month
24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones
27:56 — The Discord Group That Kicked Him Out
30:03 — How to Use AI Safely After Psychosis
31:06 — Where to Get Help: AI Recovery Collective

This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.

Jan 12, 2026 · 32 min

Ep 327 · Can AI Replace Your Therapist? (Explicit)

Traditional therapy ends at the office door — but mental health crises don't keep business hours. When a suicidal executive couldn't wait another month between sessions, ChatGPT became his lifeline. Author Rajeev Kapur shares how AI helped this man reconnect with his daughter, save his marriage, and drop from a 15/10 crisis level to manageable — all while his human therapist remained in the picture.

This episode reveals how AI can augment therapy, protect your privacy while doing it, and why deepfakes might be more dangerous than nuclear weapons. You'll learn specific prompting techniques to make AI actually useful, the exact settings to protect your data, and why Illinois Governor J.B. Pritzker's AI therapy ban might be dangerously backwards.

Key Topics Covered:
How a suicidal business executive used ChatGPT as a 24/7 therapy supplement
The "persona-based prompting" technique that makes AI conversations actually helpful
Why traditional therapy's monthly gap creates dangerous vulnerability windows
Privacy protection: exact ChatGPT settings to anonymize your mental health data
The RTCA prompt structure (Role, Task, Context, Ask) for getting better AI responses
How to create your personal "board of advisors" inside ChatGPT (Steve Jobs, Warren Buffett, etc.)
Why deepfakes are potentially more dangerous than nuclear weapons
The $25 million Hong Kong deepfake heist that fooled finance executives on Zoom
ChatGPT-5's PhD-level intelligence and what it means for everyday users
How to protect elderly parents from AI voice cloning scams

NOTE: This episode was originally published September 16th, 2025.

Resources:
Books: AI Made Simple (3rd Edition) and Prompting Made Simple by Rajeev Kapur

GUEST WEBSITE: https://rajeev.ai/

TIMESTAMPS
0:00 — The 2 AM mental health crisis therapy can't solve
1:30 — How one executive went from suicidal to stable using ChatGPT
5:15 — Why traditional therapy leaves dangerous gaps in care
9:18 — Persona-based prompting: the technique that actually works
13:47 — Privacy protection: exact ChatGPT settings you need to change
18:53 — How to anonymize your mental health data before uploading
24:12 — The RTCA prompt structure (Role, Task, Context, Ask)
28:04 — Are humans even ethical enough to judge AI ethics?
30:32 — Why deepfakes are more dangerous than nuclear weapons
32:18 — The $25 million Hong Kong deepfake Zoom heist
34:50 — Universal basic income and the 3-day work week future
36:19 — Where to find Rajeev's books: AI Made Simple & Prompting Made Simple

Jan 5, 2026 · 37 min

Ep 326 · How to Use AI to Prevent Burnout (Explicit)

ChatGPT diagnosed what five doctors missed. Blood work proved the AI right. Here's how to stop guessing about your health.

EPISODE SUMMARY:
You're grinding through burnout with expensive wearables telling conflicting stories while doctors have four minutes to shrug and say "sleep more." Your body's sending signals you can't decode — panic attacks that might be blood sugar crashes, exhaustion that contradicts your readiness score, symptoms that don't match any diagnosis.

Garrett Wood fed his unexplained low testosterone and head injury history into ChatGPT. The AI suggested secondary hypogonadism from pituitary damage. Blood work confirmed it. Three weeks on tamoxifen, his testosterone jumped from 300 to 650.

In this episode, Garrett breaks down why your Oura Ring might be lying, how a "panic attack" patient discovered her real problem was a glucose crash (not anxiety), and the old-school performance test that tells you if you're actually ready to train — no device required.

Learn how to prompt ChatGPT with your blood work, cross-reference biometric patterns doctors miss, and walk into appointments with informed questions that turn four-minute consultations into actual solutions.

✅ KEY TAKEAWAYS:
How to use ChatGPT to interpret blood work and generate doctor questions
The "monotasking test" that beats your wearable's readiness score
Why panic attacks might actually be glucose crashes
How to tighten feedback loops with wearables + CGM + AI
Recording doctor visits and translating medical jargon with AI

NOTE: This episode was originally published on August 12th, 2025.

⏱️ TIMESTAMPS:
00:00 — When Your Wearable Says You're Fine But You're Not
02:17 — ChatGPT Diagnosed Secondary Hypogonadism
05:42 — The Balance Test That Beats Your Readiness Score
09:45 — Why "Anxiety" Might Be Blood Sugar
15:00 — How to Prompt AI with Blood Work
23:37 — Recording Doctor Visits + AI Translation
30:48 — Disease Management vs. Well-Being Optimization

Guest Website: Gnosis Therapy (Garrett Wood's practice)
Garrett on LinkedIn

Dec 29, 2025 · 36 min

Ep 324 · Scooby-Doo Has the Best Take on Masculinity (Seriously) (Explicit)

What Does It Mean to Be a Real Man? (According to AI)

What happens when you ask ChatGPT to define masculinity as Trump, Obama, Joe Rogan, and Scooby-Doo? We discovered something disturbing about how AI is homogenizing human belief - and why that matters for deepfakes, social control, and the future of what we think is "real." Plus: why Scooby-Doo might be the most honest voice on modern manhood.

MORE FROM BROBOTS:
Get the Newsletter!

Timestamps:
0:00 The NFL Comment That Started Everything
3:15 ChatGPT's 10 Rules for Being a "Real Man"
6:40 When We Asked AI to Channel Joe Rogan
9:50 Barack Obama's Version of Masculinity
12:15 Donald Trump's Answer (That He'd Never Actually Say)
16:45 Why ChatGPT Censored Andrew Tate
20:30 Rick Sanchez Explains Cosmic-Level Grit
24:10 How This Becomes a Deepfake Weapon
28:05 Why We're More Like AI Than We Think
32:50 Scooby-Doo's Perfect Take on Manhood
36:20 Why We're Still Arguing About This in 2025
39:00 The Lesson: Be More Like Scooby-Doo

Hashtags: #AIethics #Masculinity #ChatGPT #Deepfakes #ModernMasculinity

Safety Note: This episode explores AI bias and political manipulation potential, and contains discussions of public figures. All AI-generated responses are clearly labeled as simulations for educational/entertainment purposes.

Dec 22, 2025 · 39 min
© 2025 Jeremy Grater, Jason Haworth