The AI podcast for product teams

45 episodes

Governance, Context, and the Org-Design Reckoning

May 12, 2026 · 45 min

The Most Important Data Points in AI Right Now

Apr 29, 2026 · 18 min

Your AI Strategy Is a Pile of Demos

Let’s stop pretending. Most AI strategies are just a collection of pilots that nobody had the courage to kill. The recent data is brutal: 95% of genAI pilots stall. Only 11% reach production in financial services. Microsoft — the biggest company in the world, with the best distribution on the planet — just reorganized Copilot because nobody internally could agree on what it was supposed to be. And while enterprises burn cycles debating governance frameworks, a new class of startups is quietly replacing entire job functions. Not assisting. Replacing. The gap between the people who get this and everyone else isn’t a skills gap. It’s a courage gap. This edition is about which side you’re on.

What You’ll Learn in This Edition

This edition confronts the uncomfortable reality that most AI investments are producing demos, not outcomes — and the structural reasons why.

* 🎙 Why agents are automating your thinking, not just your tasks — and why that distinction matters more than any model release
* ✍️ Copilot’s identity crisis is the most important product failure of 2026 so far
* 👉 The single variable that predicts AI maturity 7x better than technology choices
* 1️⃣ Why advertising AI use is now a financial liability for professional services firms
* 2️⃣ The inference cost crisis that threatens every AI business model — including OpenAI’s

Episode 4: The Era of Agents — Your Cognition Is the Product Now

We mapped three years of AI evolution in this episode and landed somewhere uncomfortable. Era one gave us wrong answers. Era two gave us wrong context. Era three — agents — is giving us wrong actions. And the stakes compound with each era, because AI is no longer just saying things. It’s doing things.

Brittany brought the number that should haunt every product leader: only 6% of organizations have fully deployed any kind of agent. Copilot hit 30% weekly active usage after six months — meaning 70% of enterprise users basically stopped opening it. The tools are moving at an extraordinary pace. Almost nobody is keeping up.

We profiled four startups winning the point-solution war that most people haven’t heard of. But the real conversation was about what happens when you hand your thinking to an agent. Not your typing. Not your scheduling. Your thinking — the research, the monitoring, the analysis, the synthesis. Something changes in you when you do that. And most people haven’t reckoned with what that means.

“We’ve trained generations of people to think linearly. Step one, step two, step three, fill out this form, follow this process. Agents don’t work like that. Agents require you to think in terms of outcomes, connections, and context.” — Arpy

Listen now: Spotify | Apple Podcasts | YouTube

You’re invited to join the AI Strategy Experiments Zoom call today. Today (March 27) at 1pm ET we’re hosting a small group of strategists, builders, and designers sharing their experiments and questions. Register here.

$490 billion in enterprise AI spending is delivering nothing. That’s not a technology failure. It’s a value creation failure. AI Value Acceleration exists to close that gap — diagnosing where AI value stalls and building playbooks that actually work. Value Assessment in 3 weeks. Value Amplification to go deep. Value Acceleration to prove what works. aivalueacceleration.com

Copilot Didn’t Fail. It Succeeded at Not Knowing What It Is.

Bloomberg reported that internal confusion over Copilot’s role, personality, and strategy has prompted a reorganization at Microsoft. Read that again. Internal confusion. Not external competition. Not technical limitations. The people building Copilot couldn’t agree on what it was for. Microsoft had everything a product could dream of — billions in funding, integration into every Office app, the largest enterprise distribution network on earth, and access to the most powerful models available. It didn’t matter. Without a clear product identity, all that distribution just delivered confusion at scale.

The uncomfortable truth: most AI products shipping today have the same disease. They’re a bundle of capabilities searching for a purpose. They demo beautifully. They onboard poorly. They get abandoned quietly. If the biggest company in the world can’t brute-force its way to product-market fit for an AI assistant, what makes you think your team can skip the hard work of defining what your AI product is actually for?

BCG: Why Usage Is Up but Impact Is Not

Employee-centric organizations are 7x more likely to be AI mature. Not 7% more likely. Seven times. Employee-centricity explains ~36% of variance in AI maturity outcomes. Model selection explains almost none of it. Over 85% of organizations remain stuck at basic task assistance. Fewer than 10% have reached anything resembling semiautonomous collaboration. The teams pulling ahead didn’t start with better tools. They started with cultures where people felt safe to experiment, fail, and teach each other what they learned. HBR confirmed it separately: peer influence…

Mar 27, 2026 · 46 min

75% of Enterprise AI Fails. The Fix Isn't a Better Model.

Every influencer is drooling over Claude Code skills files. Every product team is chasing the next model release. But for two years, the data has been screaming the same thing: capability isn’t the bottleneck. Context is. This edition unpacks what that actually means — why structured business knowledge is the highest-leverage investment a product team can make, what the “context wars” look like from the inside, and why the teams winning aren’t the ones with the best models. They’re the ones whose AI actually understands their business.

What You’ll Learn in This Edition

This edition confronts the structural reason most AI products fail — they’re missing the context that makes capability useful.

* Why Juan Sequeda from ServiceNow says “hope is not a strategy” — and what to build instead of better prompts
* The three-layer knowledge framework that gives AI a shared language across your entire organization
* CNBC’s “silent failure at scale” investigation reveals why 91% of AI models degrade without anyone noticing
* Microsoft just adopted ontology — the same concept Juan has championed for 20 years — as the foundation of its agentic AI architecture
* Citadel Securities data shows software engineer job postings rising 11% YoY despite the displacement narrative

Episode 3: Context Is the New Moat — Why Your AI Needs Business Knowledge, Not Better Prompts

Every influencer is drooling over skills files and prompt templates. Juan Sequeda, Principal Scientist at data.world (acquired by ServiceNow), has spent 20 years proving that none of it works without structured business knowledge underneath. In this episode, Juan breaks down the three-layer framework — business metadata, technical metadata, and the mapping layer that creates real semantics — and explains why the teams investing in ontology today will compound value across every AI use case they build next. His blunt assessment of skills files as a production strategy: “Hope is an interesting strategy. It’s not one that I add to my strategy.”

“If you just edit in skills, I don’t think that’s gonna be the solution to your problem. You’ll have a great POC. It’ll work for the use cases you tested on. Are you willing to put your career on the line and put that in production?” — Juan Sequeda

Listen on Spotify | Apple Podcasts | YouTube

Context isn’t a nice-to-have. It’s the architecture layer that determines whether your AI product delivers consistent, measurable value or drifts into silent failure. PH1 built this framework to illustrate what Juan Sequeda has been researching for two decades: intent, background, examples, and templates aren’t prompt engineering tricks — they’re the structural foundation that transforms an AI system from a “forever intern” into a strategic partner. Without them, you’re hoping the model figures out what “order” means in your business. Hope, as Juan puts it, is not a strategy.

RAG Was the Answer. Now It’s a Symptom of the Real Problem.

RAG dominated for two years as the default way to give LLMs context. But as context windows expanded from 8K to a million tokens, the question shifted. This video breaks down when RAG still matters — vast, dynamic datasets and cost efficiency — and when long context windows make the retrieval layer unnecessary. The strategic implication for product teams: RAG was always a workaround for a deeper problem. The real question was never “how do I retrieve the right document?” It was “does my system actually understand my business?” That’s the context layer Juan Sequeda is building — and it sits beneath RAG, long context, and every other implementation detail.

In spite of the displacement signals, software engineer job postings are up 11% year over year. But read the fine print: a posting titled “Software Engineer” increasingly means “engineer who can operate LLMs in production” or “build RAG pipelines.” The title stayed the same — the job changed. If your team hasn’t redefined what “engineering” means in the context of AI-augmented workflows, you’re hiring for yesterday’s role.

Product Impact Resources

The pattern across these resources is consistent: the teams pulling ahead are the ones investing in context, knowledge, and governance infrastructure — not chasing the next model release. Capability is table stakes. The moat is how deeply your product understands the business it serves.

* Gartner predicts 80% of enterprises pursuing AI will use knowledge graphs by 2026 to enhance context and reasoning. The shift from “better prompts” to “structured knowledge” is no longer theoretical. The Role of Knowledge Graphs in Building Agentic AI Systems
* Microsoft adopted ontology as the foundation of its agentic AI architecture — Fabric IQ, Foundry IQ, and Work IQ create a semantic layer that gives agents shared business understanding. Microsoft Adopts Ontology-Based IQ Layer for Agentic AI
* Nathan Lasnoski argues that enterprise knowledge graphs are the foundation for moving from vibe coding to scalable agentic development — without semantics…
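The three-layer framework described above (business metadata, technical metadata, and a mapping layer) can be made concrete with a small sketch. To be clear, this is our illustration, not Juan Sequeda’s or data.world’s implementation; every term, table, and column name is invented:

```python
# Hypothetical sketch of a three-layer knowledge framework.
# All names (terms, tables, columns) are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class BusinessTerm:      # Layer 1: business metadata (what a word means here)
    name: str
    definition: str


@dataclass
class TechnicalAsset:    # Layer 2: technical metadata (where the data lives)
    table: str
    column: str
    dtype: str


@dataclass
class Mapping:           # Layer 3: the semantics connecting the two
    term: BusinessTerm
    assets: list[TechnicalAsset] = field(default_factory=list)
    rule: str = ""


order = BusinessTerm(
    name="order",
    definition="A confirmed customer purchase, excluding cancellations and returns.",
)

mapping = Mapping(
    term=order,
    assets=[
        TechnicalAsset("sales.orders", "order_id", "string"),
        TechnicalAsset("sales.orders", "status", "string"),
    ],
    rule="An 'order' counts only rows where status = 'confirmed'.",
)


def describe(m: Mapping) -> str:
    # Render the shared semantics an AI system would be grounded in,
    # instead of guessing what "order" means in this business.
    cols = ", ".join(f"{a.table}.{a.column}" for a in m.assets)
    return f"{m.term.name}: {m.term.definition} (backed by {cols}; {m.rule})"


print(describe(mapping))
```

The point of the mapping layer is that “order” stops being a guess: the definition, the backing columns, and the business rule travel together, so any agent or retrieval pipeline built on top inherits the same semantics.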

Mar 16, 2026 · 52 min

The teams pulling ahead aren't the ones with the best models

AI products are shipping faster than ever. But shipping isn’t impact. The teams pulling ahead aren’t the ones with the best models — they’re the ones who can prove their product moves the business. This edition is about that gap. How to measure what matters, where the biggest barriers to impact are hiding, and what the latest research says about getting AI products to actually drive growth. Because the real competitive advantage isn’t AI. It’s knowing whether your AI is working.

What You’ll Learn in This Edition

This edition cuts through the noise to focus on the measurement gap — the difference between shipping AI and proving AI drives growth.

* The Power/Speed/Impact/Joy bullseye — a calibration framework for AI products that actually drive growth
* A Nature paper reveals why removing friction from AI may be destroying the learning your team needs
* John Maeda on why design teams are being hollowed out — and why PMs are next
* Benedict Evans on why even OpenAI can’t solve product-market fit with capability alone
* Research that should change how your team thinks about AI-assisted skill building

Episode 1: Why Your AI Metrics Are Lying to You — A Framework for Improving AI Product Performance

Your AI product might be fast, capable, and technically impressive — and still not drive the growth your business needs. In this episode, Brittany Hobbs and I introduce the Power, Speed, Impact, and Joy bullseye — a calibration framework borrowed from F1 racing. The teams winning aren’t shipping more features. They’re measuring different things entirely. We break down a three-layer eval approach and why most completion metrics are hiding the signals that matter.

“Success does not mean satisfaction. If someone stops engaging, does that mean they solved their problem — or that they were frustrated and left?” — Brittany Hobbs

Listen on Spotify | Apple Podcasts | YouTube

Your Role Isn’t Shrinking. It’s Being Hollowed Out.

John Maeda — Three major tech companies have restructured design teams into “prompt engineering pods.” Maeda’s #DesignInTech 2026 calls it what it is: the elimination of design judgment from the product process. “When you replace a designer with a prompt, you don’t lose the pixels. You lose the questions that should have been asked before anyone opened a tool.” This applies to product managers too — if your PM’s job becomes prompt-wrangling instead of deciding what to build and why, you’ve automated the wrong layer. The roles aren’t disappearing. The judgment inside them is.

Featured Resource: Strategy for Measuring & Improving AI Products

The gap between what AI products ship and what they prove is where growth stalls. This framework moves teams from tracking activity — token counts, completion rates, session length — to defining and measuring the outcomes that actually drive business impact. Most teams ship features and assume engagement means success. It doesn’t. If your team can’t answer “is this AI feature making the business better?” with data, you’re flying blind. The framework covers product discovery through scale, with concrete steps for building measurement into your AI product from the start — not bolting it on after launch. Read the full resource at ph1.ca

Waterfall: we’ll build you a car in 18 months. Agile: here’s a skateboard, we’ll iterate. AI: here’s a photorealistic render of a Lamborghini that doesn’t start. We’ve never made it easier to build something that looks incredible and does absolutely nothing. AI development doesn’t need more iteration — it needs someone asking “does this thing actually drive?” If your team is celebrating demos instead of outcomes, you’re already behind the teams that measure first and ship second.

Two years of capability gains. Almost no reliability improvement. This is the chart that should be on every product team’s wall — because it explains why your AI demos brilliantly and fails in production. Capability without reliability isn’t a product. It’s a liability.

If your team can’t name which type of AI they’re building, they can’t measure whether it’s working. Six categories that force precision. — Narain Jashanmal

Product Impact Resources

The resources in this edition make one thing clear: the teams investing in measurement and deliberate friction are pulling ahead, while the ones chasing capability are stalling. These resources challenge the assumption that faster and more capable automatically means better outcomes.

* Removing struggle from AI workflows destroys the learning that builds expertise. Teams should audit which friction to keep and which to cut. Against Frictionless AI — Inzlicht & Bloom in Nature
* AI users learned 17% less without any efficiency gains. How your team uses AI matters more than whether they use it. How AI Impacts Skill Formation — Shen & Tamkin RCT
* Two years of capability gains with only modest reliability improvement. The barrier to growth…
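To make the measurement gap concrete, here is a minimal sketch of the difference between an activity metric and an outcome metric. The session fields and the “solved” rule are hypothetical, not the episode’s framework:

```python
# Hypothetical sketch: the same session log scored two ways.
# Activity metric (completion rate) vs. an outcome signal: did the
# user's problem actually get solved? All field names and the
# "solved" rule below are invented for illustration.

sessions = [
    {"completed": True,  "returned_within_7d": True,  "escalated_to_human": False},
    {"completed": True,  "returned_within_7d": False, "escalated_to_human": True},
    {"completed": True,  "returned_within_7d": False, "escalated_to_human": False},
    {"completed": False, "returned_within_7d": True,  "escalated_to_human": False},
]

# Activity metric: looks healthy on its own.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)


def solved(s: dict) -> bool:
    # Outcome rule: completion only counts if the user came back
    # and never had to escalate to a human.
    return s["completed"] and s["returned_within_7d"] and not s["escalated_to_human"]


outcome_rate = sum(solved(s) for s in sessions) / len(sessions)

print(f"completion rate: {completion_rate:.0%}")  # 75%
print(f"outcome rate:    {outcome_rate:.0%}")     # 25%
```

The same four sessions score 75% on completion and 25% on outcomes; which number your dashboard surfaces determines whether the team celebrates or investigates.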

Mar 2, 2026 · 35 min

What Happens to Your Product When You Don’t Control Your AI?

AI was supposed to help humans think better, decide better, and operate with more agency. Instead, many of us feel slower, less confident, and strangely replaceable.

In this episode of Design of AI, we interviewed Ovetta Sampson about what quietly went wrong. Not in theory — in practice. We examine how frictionless tools displaced intention, how “freedom” became confused with unlimited capability, and how responsibility dissolved behind abstraction layers, vendors, and models no one fully controls. This is not an anti-AI conversation. It’s a reckoning with what happens when adoption outruns judgment.

Ovetta Sampson is a tech industry leader who has spent more than a decade leading engineers, designers, and researchers at some of the most influential organizations in technology, including Google, Microsoft, IDEO, and Capital One. She has designed and delivered machine learning, artificial intelligence, and enterprise software systems across multiple industries, and in 2023 she was named one of Business Insider’s Top 15 People in Enterprise Artificial Intelligence.

Join her mailing list | Right AI | Free Mindful AI Playbook

Why 2026 Will Force Teams to Rethink How Much AI They Actually Need

The risks are no longer abstract. The tradeoffs are no longer subtle. Teams are already feeling the consequences: bloated tool stacks, degraded judgment, unclear accountability, and productivity that looks impressive but feels empty. The next advantage will not come from adding more AI. It will come from removing it deliberately.

Organizations that adapt will narrow where AI is used — essential systems, bounded experiments, and clearly protected human decision points. The payoff won’t just be cost savings. It will be the return of clarity, ownership, and trust. This will manifest first among the individuals and small startups who were early adopters of AI. My prediction is that this year they’ll start cutting the number of AI models they pay for, because the era of experimentation is over and we’re entering a period where deliberate choices will matter more than how fast the model is. Read the full article on LinkedIn.

Do You Really Need Frontier Models for Your Product to Work?

For most teams, the honest answer is no. Open-source and on-device models already cover the majority of real business needs: internal tooling, retrieval, summarization, classification, workflow automation, and privacy-sensitive systems. The capability gap is routinely overstated — often by those selling access. What open models offer instead is control: over data, cost, latency, deployment, and failure modes. They make accountability visible again. This video explains why the “frontier advantage” is mostly narrative.

Independent evaluations now show that open-source AI models can handle most everyday business tasks — summarizing documents, answering questions, drafting content, and internal analysis — at levels comparable to paid systems. The LMSYS Chatbot Arena, which runs blind human comparisons between models, consistently ranks open models close to the top proprietary ones. Major consultancies now document why enterprises are switching: predictable costs, data control, and fewer legal and governance risks. McKinsey notes that open models reduce vendor lock-in and compliance exposure in regulated environments.

What Happens When “Freedom” Becomes an Excuse Not to Set Boundaries?

We’ve confused freedom with capability. If a system can do something, we assume it should. That logic dissolves moral boundaries and replaces responsibility with abstraction: the model did it, the system allowed it. When no one owns the boundary, harm becomes an emergent property instead of a design failure.

What If AI Doesn’t Have to Be Owned by Corporations?

We’re going to see a rise in AI experts challenging the expectation that Silicon Valley should control AI. What if AI doesn’t need to be centralized, rented, or governed exclusively by corporate interests? On-device models and open ecosystems offer a different future — less extraction, fewer opaque incentives, and more meaningful choice. Follow Antoine Valot as he and the Postcapitalist Design Club explore new ways of liberating AI.

Are We Using AI for Anything That Actually Matters?

Much of today’s AI usage is performative productivity and ego padding that signals relevance while eroding self-trust. We’re outsourcing thinking we are still capable of doing ourselves. AI should amplify judgment and creativity. Use this insanely powerful technology to achieve greater outcomes, not to deliver more subpar work to the world.

If We Know the Risks Now, Why Are We Still Acting Surprised?

The paper “The AI Model Risk Catalog” removes the last excuse. Failure modes are documented. Harms are mapped. Blind spots are known. Continuing to deploy without contingency planning is no longer innovation — it’s negligence.

Jan 13, 2026 · 48 min

When AI Isn’t the Answer, It’s the Problem

In Episode 48 of the Design of AI podcast, we unpack why the most common AI promises are collapsing under real market pressure. AI was meant to unlock strategic work, expand opportunity, and elevate creativity. Instead, UX and design roles are disappearing, agencies are cutting creative staff while buying automation, and freelance work is being devalued as execution becomes cheap. This episode is not about panic. It is about reality. Value still exists, but it is concentrating among those who can integrate AI into real systems, navigate ambiguity, and own outcomes rather than outputs.

🎧 Apple Podcasts | 🎧 Spotify

Key Insights About AI at Work

What the evidence shows once the optimism is removed.

* MIT Media Lab: ChatGPT Use Significantly Reduces Brain Activity (2025). Early AI use reduces attention, memory, and planning, weakening independent thinking when models lead the process.
* Wharton / Nature: ChatGPT Decreases Idea Diversity in Brainstorming (2025). AI-assisted brainstorming narrows idea diversity, producing faster output but more uniform thinking across teams.
* Science Advances / SSRN: The Effects of Generative AI on Creativity (2024). AI improves fluency and polish while consistently reducing originality and conceptual depth.
* arXiv: Human–AI Collaboration and Creativity: A Meta-Analysis (2025). Human-led AI collaboration improves quality slightly, but AI reduces diversity without strong framing and judgment.
* arXiv: Generative AI and Human Capital Inequality (2024). AI disproportionately benefits those with systems thinking and judgment, widening gaps between experts and generalists.

Realities of Being AI Early Adopters

The Raised Floor Trap by Hang Xu

AI makes baseline output easy. What it doesn’t make easy is integration, orchestration, or delivery inside real teams. Most people reach adequacy. Very few compound value. We’re not able to generate the kind of value we’re sold on.

👉 Follow Hang Xu for insights about the realities and challenges of the job market.

AI UX as a Growth Barrier

AI systems are far more capable than they appear, but their UX blocks growth. They don’t know how to help unless you know how to ask, structure, and specify intent. So even after hours of work trying to grow your AI abilities, you’ll often hit a ceiling, because these systems can’t interpret our capabilities and gaps.

👉 Follow Teresa Torres for expert Product Discovery strategies and tactics.

Help Shape 2026

We’re planning upcoming episodes on career resilience, AI adoption, and where durable value still exists. Take the 3-minute listener survey and tell us what would actually help you next year.

Which Skills Are Being Replaced by AI?

AI is not replacing jobs all at once. It is removing pieces of them. Execution, summarization, and surface analysis are increasingly automated. What remains defensible are skills rooted in judgment, accountability, synthesis across messy contexts, and decision-making under uncertainty.

* Shira Frank & Tim Marple: Cubit — Task-Level Reality Check (2025). Cubit breaks jobs into discrete tasks, revealing where LLMs already substitute for human labor and where judgment, context, and accountability still hold. It makes visible how roles erode gradually, not all at once.
* MIT Sloan: Why Human Expertise Still Matters in an AI World (2024). AI performs well in structured domains but consistently fails in ambiguity, ethics, and long-horizon tradeoffs. These limits define why senior expertise remains defensible, but only when it is exercised, not delegated.
* Harvard Business School: Why Judgment Remains a Competitive Advantage (2023). AI can generate options and recommendations, but it cannot own outcomes. Responsibility, consequence, and decision accountability remain human burdens and human moats.

Lots of News This Week

Copilot didn’t fail. It succeeded at the wrong thing. Microsoft proved AI can clear security, compliance, and procurement at massive scale. But Copilot hasn’t changed behavior. Universal assistants optimize for adoption, not dependence.
🔗 https://www.linkedin.com/posts/adragffy_copilot-didnt-fail-it-succeeded-at-the-activity-7406719225714855936-G9H3

AI credit limits aren’t a pricing tweak. They’re a reckoning. Credit caps expose the real problem. AI has marginal cost, and teams must now prove ROI per call, not ship more features.
🔗 https://www.linkedin.com/posts/adragffy_ai-activity-7407130709678567424-IzG-

AI trust is breaking faster than adoption. AI chat logs expose identity, not transactions. Scale without support erodes trust, loyalty, and long-term value.
🔗 https://www.linkedin.com/posts/adragffy_llm-ai-customerexperience-activity-7408835025787461633-j56Y

AI ROI isn’t what Anthropic says it is. Anthropic claims 80% of organizations have achieved AI ROI. They haven’t. They’ve reached table stakes. The report shows gains concentrated in efficiency, faster tasks, and internal automation, while only 16% reach end-to-end…

Dec 22, 2025 · 30 min

The Creativity Recession and Why Product Leaders Must Reverse It Now

Our latest guest is Maya Ackerman — AI-creativity researcher, professor, and author of Creative Machines: AI, Art & Us (Wiley), as well as founder of WaveAI and LyricStudio (view her recent collaboration with NVIDIA).

Maya’s perspective is not just insightful — it’s a necessary reality check for anyone building AI today. She challenges the comforting narrative that AI is a neutral tool or a natural evolution of creativity. Instead, she exposes a truth many in tech avoid: AI is being deployed in ways that actively diminish human creativity, and businesses are incentivized to accelerate that trend.

Her research shows how overly aligned, correctness-first models flatten imagination and suppress the divergent thinking that defines human originality. But she also shows what’s possible when AI is designed differently — improvisational systems that spark new directions, expand a creator’s mental palette, and reinforce human authorship rather than absorbing it.

This episode matters because Maya names what the industry refuses to admit. The problem is not “AI getting too powerful” — it’s AI being used to replace instead of elevate. Businesses are applying it as a cost-cutting mechanism, not a creative amplifier. And unless product leaders intervene, the damage to creativity — and to the people who rely on it for their livelihoods — will become irreversible.

Listen to the episode on Spotify, Apple Podcasts, or YouTube

We’re engineering a global creative regression and pretending we aren’t.

Generative AI could radically expand human imagination, but the systems we deploy today overwhelmingly suppress it. The literature is unequivocal:

* AI boosts creative output only when tools are intentionally designed for exploration, not correctness.
* When aligned toward predictability, AI drives conformity and sameness.
* The rise of “AI slop” is not an insult — it’s the logical outcome of misaligned incentives.
* New evidence shows that AI-assisted outputs become more similar as more people use the same tools, reducing collective creativity even when individual outputs look “better.”
* Homogenization is measurable at scale: marketing, design, and written content generated with AI converge toward the same tone and syntax, lowering engagement and cultural diversity.
* Repeated reliance on AI weakens human originality over time — users begin outsourcing ideation, losing confidence and capacity for divergent thought.

Resources:
* The Impact of AI on Creativity: https://www.researchgate.net/publication/395275000_The_Impact_of_AI_on_Creativity_Enhancing_Human_Potential_or_Challenging_Creative_Expression
* Generative AI and Creativity (Meta-Analysis): https://arxiv.org/pdf/2505.17241
* AI Slop Overview: https://en.wikipedia.org/wiki/AI_slop
* Generative AI Enhances Individual Creativity but Reduces Collective Novelty: https://pmc.ncbi.nlm.nih.gov/articles/PMC11244532/
* Generative AI Homogenizes Marketing Content: https://papers.ssrn.com/sol3/Delivery.cfm/5367123.pdf?abstractid=5367123
* Human Creativity in the Age of LLMs (decline in divergent thinking): https://arxiv.org/abs/2410.03703

BOTTOM LINE: If your product optimizes for correctness, brand safety, and throughput before originality, you are actively contributing to the global collapse of creative quality. AI must be designed to spark — not sanitize — human imagination.

Award-winning creative talent is disappearing at scale, and the trend is accelerating.

The global creative workforce is shrinking faster than at any time in modern history. Companies claim AI is “enhancing creativity,” yet most restructuring reveals the opposite: AI is being deployed primarily to cut labor costs. Layoff announcements top 1.1 million this year, the most since the 2020 pandemic.

What’s happening now:
* Omnicom announced 4,000 job cuts and shut multiple agencies — Reuters reporting: https://www.reuters.com/business/media-telecom/omnicom-cut-4000-jobs-shut-several-agencies-after-ipg-takeover-ft-reports-2025-12-01/
* WPP, Publicis, and IPG executed multi-round layoffs across design, writing, strategy, and production.
* Digiday interviews confirm AI is used mainly to eliminate junior and mid-level creative roles: https://digiday.com/marketing/confessions-of-an-agency-founder-and-chief-creative-officer-on-ais-threat-to-junior-creatives/

The most important read on the future and destruction of agencies comes from Zoe Scaman. She always brings a powerful and necessary mirror to the shitshow that is the modern corporate world. Read it here:

Freelancers and independent creatives are being hit even harder:
* UK survey: 21% of creative freelancers have already lost work because of AI, and many report sharply lower pay — https://www.museumsassociation.org/museums-journal/news/2025/03/report-finds-creative-freelancers-hit-by-loss-of-work-late-pay-and-rise-of-ai/
* Illustrators, motion designers, and concept artists report declining commissions as clients…

Dec 5, 2025 · 46 min

The Real Reason Tech Products Fail

Our latest episode features Jessica Randazza Pade, Head of Brand Activation & Commercialization at Neurable. Named to Campaign US’s 40 Over 40 and ELLE Magazine’s 40 Under 40, Jessica is an award-winning global digital marketer, business leader, and storyteller. She explains why AI is not a value proposition, how to turn vague use cases into measurable outcomes, and why making technology invisible is often the strongest competitive advantage.“If the user can’t articulate what’s different in their life because of your product, you’re selling a vitamin—not a painkiller.”Listen on Apple Podcasts | SpotifyShape Our 2026 ResearchWe’re mapping where teams are struggling with AI adoption and what tools, frameworks, and support they need in 2026. Your input directly shapes our annual research and the topics we cover.Take the survey → https://tally.so/r/Y5D2Q5AI has lowered the cost of prototyping but raised the bar for adoption. Most AI products fail because they launch demos instead of durable workflows, rely on large models where small ones would work better, ignore trust, or sell “time savings” instead of business outcomes. Organizations resist tools that feel risky, inaccurate, unproven, or misaligned with real workflows. Complicated architecture, poor UX, weak personalization, and unclear ROI all compound the problem. Here’s a sample of it:#3: Your product doesn’t actually learn. Fake personalization destroys trust.#4: One hallucination can end adoption permanently.#8: “Saving time” is not a business case—outcomes are.#11: Organizational silos suffocate AI products.#17: Without a workflow and measurable ROI, you don’t have a product.AI will not save your product. 
Only reliability, trust, workflow clarity, governance readiness, and measurable value delivery will. Read the full article → https://ph1.ca/blog/why-your-AI-product-will-fails

The Year of AI Value

This video covers why 2026 marks a turning point where AI is judged not by novelty or intelligence but by measurable ROI, workflow impact, and operational reliability. It explains why businesses are shifting from “AI features” to fully redesigned AI-enabled systems. We are past the point of buying AI based on promises. AI buyers no longer invest because the tech is impressive. They invest when it:

* delivers measurable ROI
* reduces operational and compliance risk
* integrates into existing workflows
* produces consistent results
* overcomes organizational resistance and silos

If you’d like us to create a full episode on why AI products fail, add a comment to this post.

The AI Adoption Curve Is About to Flip

This video explains how organizations are moving from experimentation to structural integration, redesigning roles, responsibilities, and workflows around AI. It also highlights early signals that distinguish “tool usage” from true operational adoption. Watch →

Featured Thinker: Stuart Winter-Tear

This week we’re spotlighting the insightful work of Stuart Winter-Tear, founder of Unhyped. His writing reframes LLM inconsistency as a reflection of the chaotic and contradictory data ecosystems they’re trained on—challenging assumptions about rationality, coherence, and system behavior. LinkedIn | Substack

Featured Reads

1. The GenAI Divide: Why 95% of enterprise GenAI projects fail
MIT’s 2025 State of AI in Business report finds that 95% of GenAI pilots generate no measurable ROI, mainly due to lack of workflow integration and unclear value metrics.
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

2. Apple Mini Apps and the new distribution frontier
Greg Isenberg outlines how Apple Mini Apps may redefine onboarding, distribution, and reach across the entire consumer ecosystem.
https://x.com/gregisenberg/status/1989341460894711838

3. Calum Worthy’s “2wai” and the ethics of selling the unimaginable
The actor launched an app enabling people to generate AI avatars of deceased relatives—a revealing look at how AI now commercializes ideas once considered unthinkable.
https://www.businessinsider.com/calum-worthey-2wai-ai-dead-relatives-app-launch-2025-1

4. The Complete Guide to Building with Google AI Studio
Marily Nika provides a comprehensive, practical guide to building production-ready applications with Google’s AI ecosystem.

5. SNL’s Glen Powell AI Sketch: When satire becomes a warning
The Atlantic unpacks how SNL’s AI sketch captures the cultural moment—where AI shifts from hype to comedic critique, signaling deeper public skepticism.
https://www.theatlantic.com/culture/2025/11/snl-glen-powell-ai-sketch/684944/

Coming Up on the Podcast

Our upcoming guests include:

* Ovetta Sampson — Chief Human Experience Officer & AI Design leader — https://www.ovetta-sampson.com/
* Dr. Maya Ackerman — Generative AI researcher and creativity systems expert — https://maya-ackerman.com/
* Leonardo Giusti, Ph.D. — Head of Design, Archetype AI — https://www.archetypeai.io/

If you haven’t participated yet, please take our 2026 survey and help shape where our research goes next: https://tally.s

Nov 18, 202544 min

Designing Agents That Work: The New Rules for AI Product Teams

Our latest episode explores the moment AI stops being a tool and starts becoming an organizational model. Agentic systems are already redefining how work, design, and decision‑making happen, forcing leaders to abandon deterministic logic for probabilistic, adaptive systems.

“Agentic systems force a mindshift—from scripts and taxonomies to semantics, intent, and action.”

🎧 Listen on Spotify
🍎 Listen on Apple Podcasts

And if you want to go deeper, check out Kwame Nyanning’s book, Agentics: The Design of Agents and Their Impact on Innovation. It’s the definitive field guide to designing agentic systems that actually work.

Most striking for me was when we discussed the need to move from pixel-perfect to outcome-obsessed. Designers and product teams have long been obsessed with the delivery of the output; now it’s time to be most concerned with the impact on customers.

The hard truth: Most organizations are trying to graft AI onto brittle systems built for predictability. Agentic design demands something deeper: ontological redesign, defining entities, relationships, and intents around customer outcomes, not internal structures. If you can’t model intent, you can’t build an agent.

Key takeaway: Intent capture is the new UX. Products that succeed will anticipate user context, detect discontent, and adapt autonomously.

Featured Articles: Where Reality Collides with Ambition

AI Has Flipped Software Development — Luke Wroblewski
Wroblewski lays out how AI has upended the software stack. Interfaces now generate code. Designers define the logic while engineers review and govern it. The result? Faster cycles but a dangerous illusion of progress. Design intuition becomes the new compiler, and prompt literacy replaces syntax. The real risk is velocity without comprehension; teams ship faster but learn slower.
Takeaway: Speed isn’t the problem; blind acceleration is.
Governance, evaluation, and feedback loops are now design disciplines.

Agentic Workflows Explained — The Department of Product
This piece exposes what it really takes to build functioning agents: memory, planning, orchestration, cost control, fallback logic. If your “agent” doesn’t break, it’s probably not learning. Resilient systems require distributed cognition: agents reasoning and retrying within boundaries. Evaluation‑first design becomes the only safeguard against chaos.
Takeaway: If your agent never fails visibly, it’s not thinking deeply enough. Failure is how agents learn.

Featured Videos: Cutting Through the Noise

This viral video sells the dream—agents at the click of a button. The reality? Building bots has never been easier, but building agents remains brutally hard. Real agents need long‑term memory, adaptive interfaces, and feedback loops that learn from success and failure. Wiring APIs is not design; it’s plumbing. Until agents can reason, reflect, and recover, they’re glorified scripts.
Reality check: The tools are improving, but the discipline is not.

A rare honest take. This one focuses on the HCI, orchestration, and reliability problems that still plague agentic systems. We’re close to autonomous task completion, yet nowhere near persistent agency. The real challenge isn’t autonomy—it’s alignment over time.
Takeaway: Advancement is fast, but coherence is slow. Designing for recovery and evaluation is the new frontier.

Join Our Next Workshop

If you want to turn these insights into action, join our upcoming Disruptive AI Product Strategy Workshop. You’ll learn how to pressure‑test AI ideas, model agentic systems, and build products that survive beyond the hype. There’s a special 2‑for‑1 offer at the link—bring a teammate and cut the noise together.

Recommended Resource: AI & Human Behaviour — Behavioural Insights Team (2025)

BIT’s report is a must‑read for anyone designing human‑in‑the‑loop systems. It dissects four behavioural shifts: automation complacency, choice compression, empathy erosion, and algorithmic dependency. Their experiments reveal that AI assistance can dull cognition: users who relied most on recommendations learned less and questioned less. They also found that friction builds trust; brief pauses and explanations improved comprehension and retention. The killer insight? Transparency alone doesn’t work. People often overestimate their understanding when systems explain themselves.
Takeaway: Don’t make users “trust AI.” Make them verify it. Design friction that protects judgment.

Recommended Reads: What to Study Next

* Computational Foundations of Human‑AI Interaction — Redefines how intent and alignment are measured between humans and agents.
* Understanding Ontology — The O-word, “ontology,” is here! Traditionally, you couldn’t say the word “ontology” in tech circles without getting a side-eye.
* The Anatomy of a Personal Health Agent (Google Research) — A prototype for truly personal, proactive AI systems that act before users ask.
* What is AI Infrastructure Debt? — Why ignoring the invisible architecture behind agents is the next form of

Oct 29, 202544 min

Play, Prompts, and the Perils of Incrementalism

In our latest episode, Michelle Lee (IDEO Play Lab) makes the case that play unlocks the next billion-dollar AI market. She reminds us that kids don’t stop at answers—they ask what if and turn shoes into cars or planes. That divergent mindset is exactly what product teams have lost.

“Play is one of the best ways to challenge the norms, to think wide, imagine new possibilities.”

Michelle shares:

* How IDEO discovered billion-dollar opportunities (like PillPack, later acquired by Amazon) by staying curious.
* Why teams should sometimes use older, glitchier versions of AI tools, because the “mistakes” spark better ideas.
* Why incrementalism burns teams out and how designing for attitudinal loyalty beats chasing short-term metrics.

🎧 Listen here → Play unlocks the next billion-dollar AI market

Uncomfortable Truth: Most “AI strategies” today are adult strategies — converging too quickly, chasing predictability, and mistaking incremental progress for innovation. That’s why the breakthroughs are happening elsewhere.

Product Workshop: Find Your Disruptive Path

If your roadmap looks like everyone else’s, you’re already behind. Our next AI Product Strategy Workshop (Oct 30) is built for teams who want to:

* Go beyond features and efficiency to discover truly disruptive opportunities.
* Use LLMs as intelligent sparring partners to pressure-test fragile ideas before they waste time and budget.

Spots are limited → Register here

Hard-Cutting Take: If your roadmap reads like your competitors’, it’s not strategy—it’s risk management dressed up as vision.

Incrementalism Is the Silent Killer

We’ve all felt it: the slow grind of incremental product decisions that look safe but quietly kill ambition. My new piece argues that incrementalism is the silent killer of AI products—a trap for teams rewarded for predictability instead of progress.

Read it on LinkedIn → Incrementalism is the Silent Killer of AI Products

Uncomfortable Truth: Incrementalism feels safe because it rarely fails spectacularly.
But it guarantees mediocrity—and in AI, mediocrity is indistinguishable from irrelevance.

AI Launches to Watch

A wave of new releases will reshape how we design and ship AI products:

* OpenAI: Stripe/Shopify integrations + new pre-designed prompts for professionals.
* Anthropic: Chrome plugin + Claude Sonnet 4.5, a faster, cheaper model that expands prototyping and evaluation capabilities.
* OpenAI Sora 2: Newly launched today, unlocking endless possibilities for video and creative storytelling and signaling a profound shift in how generative tools will shape the creative industries.

These aren’t just upgrades—they’re reshaping commerce and the browser itself. The integration of Stripe and Shopify signals AI’s deepening role in transactions, while Anthropic’s Chrome plugin points to a future where the browser becomes a true intelligent workspace. It’s likely why Atlassian just acquired The Browser Company (maker of Arc and Dia). These moves aren’t incremental improvements; they’re like a rushing river, pushing the entire industry forward whether teams are ready or not.

The next frontier isn’t who has the biggest model—it’s who controls the browser as the operating system for work. Looking beyond that, it will be who controls our real-world experiences… (more on that soon with an upcoming guest)

When Projects Go Off the Rails

Even as the models improve, they’re only as good as the prompts and evaluations behind them. We’ve seen how easily “comprehensive business cases” collapse when fabricated ROI, vendor costs, and timelines are passed off as fact. It’s the Wizard-of-Oz problem: behind the curtain, most AI projects are stitched together with fragile assumptions.

Uncomfortable Truth: Most AI decks aren’t strategy—they’re theater. And like any stage play, the curtain eventually falls.

Hidden Pitfalls of AI Scientist Systems

A new paper, “The More You Automate, the Less You See: Hidden Pitfalls of AI Scientist Systems” (arXiv, Sep 10, 2025), warns about the risks of fully automated science pipelines. By chaining hypothesis generation, experimentation, and reporting end-to-end, teams risk producing results that look authoritative but mask invisible errors and systemic failures. (arxiv.org)

Uncomfortable Truth: Automation without visibility doesn’t accelerate discovery—it accelerates blind spots.

Articles & Ideas We’re Tracking

* Prompts.chat → A growing open library of prompt patterns that shows why better prompt design, not just better models, is becoming the key differentiator for teams.
* AI in the workplace: A report for 2025 (McKinsey) → McKinsey highlights that while adoption is accelerating, most organizations hit cultural and skills barriers long before technical ones.
* The Architecture of AI Transformation (Wolfe, Choe, Kidd, arXiv) → This 2×2 framework shows why most companies get stuck in incremental “legacy loops” rather than unlocking transformational human-AI collaboration.
* TechCrunch: Paid raises $21M seed to pioneer results-based billing with AI

Oct 1, 202541 min

AI Product Strategy FAQ, Minus the Bullsh*t

Our latest episode features Nicholas Holland (SVP of Product & AI at HubSpot) and explains how AI is actually changing go-to-market teams:

* AI cuts rep research time and turns calls into structured insight
* “AI Engine Optimization” (AEO) is becoming the new SEO

This conversation isn’t speculative—it’s a blueprint. Listen to Episode 42 on Apple Podcasts

🚨 Upcoming Workshop: Sept 18 — AI Product Strategy for Realists
Use promo code pod30 at checkout to get 30% off your registration! Join us for a live 90-minute workshop that goes beyond the hype. We’ll walk through real frameworks, raw mistakes, and how to make AI product strategy actually work—for small teams, scale-ups, and enterprise leaders.
👉 Save your seat now

AI Product Strategy FAQ, Minus the Bullsh*t

Over the past few months, we’ve been collecting the most common—and most misunderstood—questions about AI product strategy. What we found were recurring patterns of confusion, hype, and hope. This article breaks down those questions one by one with honest answers, uncomfortable truths, and hard-won lessons from teams actually building and shipping AI products.

Each section includes:

* A blunt reality check (“Uncomfortable Truth”)
* A strategic lens for tackling it
* A sticky insight to anchor your messaging
* A practical takeaway

This is not a “how AI works” explainer. This is how to make it useful—inside a real product.

Q1: How do we choose the right use case for AI in our product that actually delivers value?

Uncomfortable Truth: The best use cases might be internal—not flashy or customer-facing. If you’re just “adding AI” for the optics, you’re already off-track.
Strategic Frame: Don’t chase the cool feature—hunt down the messiest workflow and blow it up.
Always Remember: Your AI should solve a problem your users complain about—not a problem your team finds interesting.
Research This: Map the top 10 recurring tasks inside your product (or across your internal ops).
Which of them have the highest time cost and lowest user satisfaction? That’s your AI opportunity space.
Real Example: Altan (natural language app builder); internal fraud detection automation; AI for helpdesk triage.
Takeaway: Pick the ugliest, least scalable problem your users hack around with spreadsheets. Then automate that.

Q4: How do we handle data privacy and ethics when integrating AI features?

Uncomfortable Truth: Most tools don’t offer true privacy—they use your data to train their models. That’s not a technical flaw—it’s a business choice.
Strategic Frame: If trust is central to your brand, bake it into the infrastructure. Build sandboxes. Offer guarantees. Publish your governance.
Always Remember: You don’t get to ask users for their data and their forgiveness.
Research This: Ask your legal, compliance, or procurement partners what requirements would be non-negotiable for adopting a third-party AI tool. Then apply those to your own product.
Example Guidance: Make “zero training from user data” a tiered feature—or your default.
Takeaway: If you’re targeting enterprise buyers, your AI feature won’t get through procurement unless you have strict privacy toggles and a clear usage log.

Q5: How do we measure the success of AI features in a product?

Uncomfortable Truth: More engagement doesn’t always mean more value. In AI, time spent might mean confusion—or masked frustration. People may feel delight and friction in the same moment, and without qualitative research, you won’t know which signal you’re shipping.
Strategic Frame: Define one high-value outcome. Build just enough UI to validate whether users reach it.
Always Remember: Don’t just watch what users do—listen for what they expected to happen.
Research This: Run a usability test where you ask users to explain what they expect the AI feature to do before using it—then again after.
Once you've delivered an output that surprises them, ask them what outcomes it enables.
Takeaway: In a contract automation tool, the success metric isn’t “time in app”—it’s “first draft accepted with zero edits.” That’s your true win signal.

Q6: What’s the best way to communicate AI capabilities to non-technical stakeholders or users?

Uncomfortable Truth: AI isn’t novel anymore—outcomes are.
Strategic Frame: Sell transformation, not tech. Show how life is better with the tool than without.
Always Remember: Once someone experiences the magic, it doesn’t matter what powers it.
Research This: Ask 5 users to explain your AI feature to a friend, using their own words. Their phrasing will tell you how clearly the value lands—and what metaphors or language they trust.
Examples:
* GlucoCopilot: Turns data chaos into peace of mind.
* Flo: Makes symptom tracking feel intuitive and empowering.
* Lovart: Auto-generates brand kits from a single prompt.
Takeaway: Everyone’s building outputs. You win by delivering outcomes. Spreadsheets are useful to power users—but most people just want the insight and what to do next. AI should skip the formula and deliver the finish line.

Q7: How do we monetize AI in a wa

Sep 16, 202546 min

The End of Product Teams as We Know Them

🎙️ Listen on Spotify | Apple Podcasts | YouTube

I recently spoke with Maor Shlomo, founder of Base44—the platform that lets anyone build apps, tools, and games just by describing them to an AI. In six months, he built Base44 solo and sold it to Wix for $80M. It’s the clearest signal yet: the rules of building have changed, and most teams aren’t ready.

We dug into:

* Why vibe coding crushes the myth that innovation requires big teams and big funding.
* How cross-domain generalists will thrive while narrow specialists get sidelined.
* Why software that doesn’t become agent-driven will be left for dead.
* The ruthless advantage of starting over quickly when the build cost is near zero.

Maor’s blunt take: “If one person can go this far alone, do we need whole teams to achieve the same things?”

🎧 Full episode: Listen on Spotify

The uncomfortable truth: Interfaces are vanishing

Vibe coding strips away menus, clicks, and UIs. You speak, and the machine builds. The UX profession must decide—adapt to this new layer of interaction, or watch relevance slip away.

* Speak ideas, skip interfaces.
* Abstraction layers are collapsing.
* Creation is now a conversation.

🔗 Read the full post on LinkedIn

📅 AI Product Strategy Workshop — Register here

This isn’t a “future of work” talk. It’s a hands-on reality check.

* Spot where AI will gut existing workflows—and where the real opportunities lie.
* Pressure-test your product strategy against the agent-driven future.
* Learn how to pivot faster than incumbents weighed down by legacy.

If you think you can wait this out, you’ll already be too late. There’s a 2-for-1 deal right now using this link.

* SSRN study: AI is already displacing workers across industries.
* Challenger, Gray & Christmas: 10,000+ AI-driven layoffs in the first seven months of 2025.
* World Economic Forum: up to 30% of U.S.
jobs could be automated by 2030.
* Anthropic CEO Dario Amodei: “Half of entry-level white-collar jobs may disappear, pushing unemployment to 10–20% within five years.” ([Axios](https://www.axios.com/2025/05/28/ai))

✍️ I recently published Navigating Contradictions: A Manifesto for Product Teams in an Era of Change. In it, I confront the contradictions head-on: speed vs. depth, AI optimism vs. ethical risk, innovation vs. trust. Teams that refuse to wrestle with these tensions won’t survive.

Key line: “Product teams must learn to hold space for competing truths—where speed and discovery coexist with responsibility and depth.”

🔗 Read the full article on Medium

Agencies and consultancies have thrived on labor arbitrage. That arbitrage just died. As AI agents mature, they won’t just support consultants—they’ll cannibalize them. The uncomfortable truth: if your business model depends on armies of analysts or designers, you’re already obsolete.

🔗 Read the full post on LinkedIn

👉 What do you think? I’ll admit it: I once wrote vibe coding off as a gimmick. Now I see it as the end of UI as we know it. Every interface has been an abstraction—an awkward compromise between human thought and digital execution. Those compromises are being stripped away at speed.

The uncomfortable truth? The gap between an idea and a product is collapsing. That means fewer roles, fewer gatekeepers, and a brutal shift in how work gets done.

Have you tried vibe coding? Does it excite you, scare you—or both? Reply and let’s talk. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Sep 9, 202541 min

From AI as Tool to AI as Teammate: Lessons from Atlassian & What’s Next for Product Leaders

🎙 Episode 40: Atlassian’s Secrets to Successful Agents

In this episode, Jamil Valliani (VP & Head of Product AI at Atlassian) shares how they embed AI across Jira, Confluence, and Trello through intelligent agents that blend into workflows—far from mere “+AI” buttons. He emphasizes starting small with tangible prototypes to build momentum and leadership alignment, showing that AI gains stick when they’re experienced, not explained.

Highlights from the episode:

* Hands-on AI adoption at Atlassian: transforming workflows, not just products
* From friction to flow: how prototypes bridge skepticism and trust
* AI as teammate, not feature—designing for collaboration, not automation
* Adoption baked into experience—make AI habitual, not optional

“The most successful teams will treat AI not as a button you press, but as a teammate you collaborate with.”

Listen on Spotify | Listen on Apple | Watch on YouTube — and share one workflow where AI acting more like a teammate could unlock unexpected value.

About the Guest:
Jamil Valliani brings two decades of product leadership (including 15 years at Microsoft) to Atlassian, where he’s spearheading AI-powered design.
* LinkedIn
* Atlassian Rovo

Upcoming Workshop: AI Product Strategy

Product teams everywhere are facing the same challenge: leadership wants AI integration for competitive advantage, but without certainty about which AI products will actually be valuable to customers.

When: Thursday, September 18, 2025 (online)

What you’ll gain:

* Diagnose the highest-leverage AI use cases
* Prototype with precision—avoid costly detours
* Craft a resilient strategy that scales beyond the pilot phase

Register on Eventbrite and get a 2-for-1 promo.

Learn to Synthesize or Else

In a world awash with data, the real advantage lies not in knowing more—but in drawing clarity from the noise.
Product and design leaders must become the translators of complexity, turning abundant knowledge into purposeful, actionable insight.
h/t Stuart Winter-Tear

Emerging Shift: Role-Dissolving AI

Figma, OpenAI, and others are signaling a paradigm shift: AI is merging design, engineering, and research into a unified discipline. The competitive edge now lies in craft, judgment, and cross-disciplinary fluency—not siloed specialization.
AI Merging Tech Roles, Favoring Generalists: Figma CEO Dylan Field

Featured Video: Why Designers & Engineers Must Rethink Workflows for AI to Deliver Real Value

This video presses teams to question legacy workflows. Without overhauling collaboration models, decision-making structures, and design intent, even advanced AI remains misunderstood or underleveraged.

Research To Reframe Your Strategy

1️⃣ Mixture of Reasoning (MoR)
Why it matters: LLMs can be trained to switch between reasoning styles—stepwise logic, analogies, symbolic reasoning—without prompt engineering.
Strategy shift: Build assistants that adapt reasoning to the task: planning one moment, diagnosing the next.
Quick test: A/B fixed vs. adaptive reasoning in support/search flows to spot gains in mixed-query handling.

2️⃣ In-Context Learning as Implicit Weight Updates
Why it matters: Transformers tweak their own behavior on the fly based on prompt context—no retraining required.
Strategy shift: Enable products to adapt within interaction sessions, not over multiple deploy cycles.
Quick test: Prototype context-aware replies and monitor when users feel seen vs. served.

3️⃣ Chain-of-Thought (CoT) Monitorability
Why it matters: Exposing AI’s reasoning steps helps catch misalignment before it reaches users—but this safety window is fragile.
Strategy shift: Don’t equate explanation with trust. For high-stakes domains, embed traceability and risk alerts.
Quick test: Add CoT transparency to UX and measure how user trust shifts when rationale is visible.

Follow my co-host, Brittany Hobbs, for essential research and product insights.

Your Next Challenge

Most teams drop AI into their products like sprinkles on a cupcake. But strategy—true product strategy—demands AI baked into the experience, from the core outward. Reply here or email me.
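The "A/B fixed vs. adaptive reasoning" quick test above boils down to comparing two success rates. A minimal sketch in Python using a standard two-proportion z-test; every count below is a hypothetical placeholder, not a result from any study mentioned in this newsletter:

```python
# Minimal sketch of the "A/B fixed vs. adaptive reasoning" quick test.
# All counts are hypothetical placeholders; swap in your own experiment data.
from math import sqrt, erf

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Two-sided two-proportion z-test: does variant B's success rate differ from A's?"""
    p_a = success_a / total_a
    p_b = success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical: fixed reasoning resolved 412/600 support queries; adaptive 455/600.
p_a, p_b, z, p = two_proportion_z(412, 600, 455, 600)
print(f"fixed={p_a:.1%} adaptive={p_b:.1%} z={z:.2f} p={p:.4f}")
```

Swap in real resolution counts from each arm of your flow; a small p-value suggests the adaptive arm's gain is unlikely to be noise, though it says nothing about whether the gain is worth the added complexity.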

Aug 14, 202547 min

The Risks & Research of Over Reliance on AI

After a frustrating week of trying to wrangle AI outputs, we decided to explore the risks of overreliance on AI. It’s good for us to question our tools. It enhances our processes and challenges us to find the right ones.

Listen on Spotify | Listen on Apple Podcasts | Watch on YouTube

In this episode, we say the quiet parts out loud. Not only are LLMs often feeding us incorrect information, but over-trusting these systems poses a serious risk.

We can look at this Rolling Stone article headline and immediately laugh it off. It is insane to believe this will happen to anyone we know. However, in Mark Zuckerberg’s vision of the AI future, your friends will be bots. The loneliness epidemic is real. One in three Americans feels lonely every week. Data from Harvard’s Making Caring Common Project suggests that loneliness is tied to rising anxiety, to not feeling part of this country, and to more than social isolation: 65% of respondents blame “our society,” pointing to a lack of confidence in our way of life and institutions.

So, it should be no surprise that Harvard Business Review found that the top three use cases of 2025 involved loneliness and navigating life’s stresses. AI could quickly become the next addiction for a world desperate for solutions. The fact that there’s demand for robo-companionship shouldn’t be treated as validation for building more tools that dissociate people from life.
Let’s go back to exploring this topic from the perspective of business users.

Understanding GenAI’s Productivity Gains

As we barrel into the AI-powered era, we can take one of two perspectives:

* GenAI products are the next evolution of SaaS: precise tools for specific workflows
* LLMs are the next evolution of social media, where instead of degrading our interpersonal relationships, AI will addict us to easy and often incorrect information

The majority of the research identified productivity gains and time savings, which would support the goal of GenAI as a professional advantage. But when you dig into the data, there are concerns.

Many studies are funded by Microsoft, OpenAI, and Google, like this one showing that GitHub Copilot users completed tasks 55.8% faster than the control group. While that result was impressive, participants were being assessed on their ability to complete a very basic task. The paper’s results were boosted by showing that people with less experience benefit more from coding assistants, something that should worry anyone concerned about being replaced by cheaper talent boosted by AI.

But those results were contradicted by a separate study that found no DevOps productivity gains from using the same GitHub Copilot. That study found the code quality to be poor, leading to a 41% increase in bugs! Remember that GitHub is owned by Microsoft and powered by OpenAI’s foundation models.

This contradictory data highlights a paradox of GenAI: the technology is increasingly successful at basic tasks, but shouldn’t be relied on to do our work for us. This Danish study hammers the point home: time saved by AI is offset by new work created. If we shake ourselves out of the AI hype stupor, we can critically examine the current state of LLMs more like SaaS tools. 95% of SaaS tools available won’t help you and your business.
Once you find the right tool for your hyper-specific use case, the AI product’s success will depend on its implementation and the first-party data entered into it.

More Research about Using AI

Behavioural research about AI should be considered a counterpoint to the benefits of AI. Yes, leveraging GenAI will produce productivity gains in specific circumstances. But the technology also brings risks and considerations that should be built into the design and business case of “should we build this” discussions.

* Increased AI use linked to eroding critical thinking skills
* When experiencing time pressure, we’re more susceptible to misinformation
* AI systems are already capable of deceiving humans
* People Facing Life-or-Death Choice Put Too Much Trust in AI
* AI’s Trust Problem: Twelve persistent risks of AI that are driving skepticism

Catch up on Recent Design of AI Episodes

31. AI is Disrupting Architecture and Lessons for Digital Product Teams
Guest Matthew Krissel (FAIA) explores how AI is reshaping architectural design and what digital product teams can learn about process, creativity, and scale from the built environment.

30. Take Control of AI’s Predictive Power – Tyler Hochman, Forethought
Tyler Hochman shares how businesses can operationalize AI for forecasting and insights by targeting high-value, repeatable problems and unlocking underutilized data.

29. Trust is a Double‑edged Sword: AI will Transform Services – Sarah Gold, Projects by IF
Sarah Gold explains how AI changes our relationship with services and why it’s urgent to rethink trust, transparency, and accountability in product design.

28. AI will Transform Pro

May 5, 202539 min

AI Promises us More Time. What Should we do With it?

When reports like Adecco’s Global Workforce of the Future survey find that the average saving for workers using AI is one hour a day, we should question it:

* What did those workers do with their time savings?
* Should that time savings benefit the employer or the employee?
* Can we trust such a hard-to-measure stat?

Our latest episode tackles this and other disruptions happening to the creative and production processes. Matthew Krissel is the Co-Founder of the Built Environment Futures Council and a Principal at Perkins&Will. For over two decades, he has led transformative architectural projects across North America and internationally. We discussed how AI is disrupting architecture and lessons for digital product teams. He struck powerful points throughout our conversation about questioning the role of time and permanence in a world where we want more, faster.

Other points covered in the conversation:

* Commoditizing design makes production easier, enabling societies to tackle challenges like housing shortfalls
* Commoditizing design devalues other vital processes, like community engagement, respectful place-making, and longevity of projects
* Over-indexing AI’s potential as a workflow optimizer, while under-indexing its potential to reimagine how complex projects are planned and operationalized

Listen on Spotify | Listen on Apple Podcasts

In this newsletter, I’d like to tackle the concept of time saving and what it means from the perspective of crafting an AI strategy. Here was the most important quote from the episode:

So just because something took half the time it did before, what happened is we just did more. So we just filled the time. Is there something higher and better use? I suspect that somewhere along the line the designs got better. Also I suspect that somewhere along there was diminishing returns. We were just doing more because we could, not that it was actually yielding anything better.
Are you gonna focus on fewer, but better, and increase your quality? Are you going to spend more time on business development or some entrepreneurial side hustle? Just go home early? What you decide to do as we start to gain productivity time is going to shape a lot of where this is all happening.

Newsletter recommendation: Scott Belsky

Essential insights and lessons from Scott Belsky that anyone building with AI must read. His newsletter is fantastic and a must-subscribe because of his unique cross-section of expertise across creativity, product, and innovation. His books have also always been pivotal reads for advancing your craft. Hopefully, we can do some of the same with our Design of AI podcast and newsletter.

Who should benefit most from your ability to learn AI: You or your employer?

The challenge to creatives and builders is to decide who should benefit from these transformative technologies if you’re self-taught:

* Should you gift your employer the benefits if you’ve taught yourself ways of getting 25% more work accomplished in a day?
* Should you gift yourself the benefits of your increased productivity and work on side projects, or spend more time with your family?

Historically speaking, employers were responsible for the means and training of production. They paid for novel technologies —desktops, SaaS, big data— and were responsible for training you on how to use them. AI is different because employers often lag behind employees in embracing the technology and educating people on how to use it effectively. It is very easy to argue that the 200 hours you’ve spent learning AI outside of work hours should exclusively benefit you.

AI Time Savings: Benefits & Risks

Technologies have consistently saved us time, but the resulting effects have been questionable. The internet and mobile phones connected the world, while also leading to poorer health outcomes due to more time spent sitting.
We also spend more time alone than ever. Further back, the Industrial Revolution raised the quality of life for everyone. Still, the commoditization of work led to industrialists exploiting child labour and subjecting workers to deplorable conditions that polluted communities. The time the workforce saved most benefited employers, with employees giving up their ways of life in favour of steady incomes. Most relocated to cities, got cut off from their families, and learned the pain of commuting for the first time.

When it comes to AI, the benefits we hope for centre on automation and augmentation. The hope is that we will benefit from less shitty work (automated away) and that our new capabilities (augmented by AI) will enable us all to become wealthy entrepreneurs. Sure, this may be true for the top 0.01% of AI users who learn how to run what is typically a 10-person business by themselves. For the rest of us, our work may in fact get a lot shittier. At least that’s what the authors of the upcoming book, The AI Con, believe. The authors (and upcoming Design of AI guests), Alex Hanna and Emily M. Bender tell a tale of how AI’s r

Apr 22, 202555 min

AI's Predictive Powers will Change how we Live & Work

As much as image generation is fun, the power of GenAI is prediction. The technology operates very similarly to people you might meet:

* Some people have studied a single topic for a decade. They’re experts in that topic and can easily infer, correct, and complete tasks. They’re unreliable for everything else.
* Some people are generally knowledgeable and have a good understanding of many topics. They aren’t experts but can reliably assist you in many ways. But they’ll also be wrong sometimes.

Frontier models —from OpenAI, Anthropic, etc.— are highly knowledgeable in almost every topic. That’s the result of being trained on all accessible information online, data they’ve licensed, plus data they’ve allegedly stolen. AI products built on these frontier models are immediately powerful for completing any task. But if you build a point solution on proprietary data, explicitly trained on a narrow topic, it can achieve an expert level.

That was the focus of our conversation with Tyler Hochman, the Founder and CEO of FORE Enterprise. We discussed unlocking AI’s predictive power by focusing on expensive and repeating problems, and how any business or founder can leverage specialized data sets to train AI models that deliver powerful prediction capabilities.

Listen on Spotify | Listen on Apple Podcasts | Watch on YouTube

He’s built AI-powered software to predict when employees may leave their jobs, offer fashion advice, and help professional sports teams improve performance.

This conversation highlights how important your first-party data will become. This data includes more than just your customer data; it should include documented workflows, quantified initiatives, and a matrix of your offerings/capabilities. Anything repeatable must be quantified as a learning tool.

Example of a data collection strategy for AI training

When OpenAI launched a new image generation feature in ChatGPT, everyone jumped on it.
AI-generated images in the Studio Ghibli style infested our feeds. These images sparked a lot of worthy debate about copyright infringement, which added to the ethical concerns about how OpenAI trains its models. A recent study highlighted evidence that ChatGPT is trained on copyrighted works.

Given that AI models are running out of data to consume, they need to find clever ways to access new data sets. Enter ChatGPT’s image generation tool and the Ghibli craze. Millions of people have been feeding their photos into the model, giving it access to an entire universe of new training data to improve the quality of its image generation capabilities.

Lesson: Collecting user-generated content can provide your custom model with access to training data that was never possible before. This holds true whether your product is a document scanner, video generator, accounting software, run-tracking app, or anything else. As we move into the next phase of AI model evolution, the data you have access to might become your best competitive moat. Thus, businesses with access to ethically sourced content from their communities and customers have an advantage.

Thanks for reading Design of AI: Strategies for Product Teams & Agencies!

Future of AI-powered workforces

Yesterday, LinkedIn exploded with screenshots of an internal memo sent by Shopify CEO Tobi Lutke to teams. It marks the most public evidence that AI is moving from a toy we experiment with to a critical skill that you’ll be scored on in your next performance review.

The data backs up that AI adoption is surging within workplaces. A study by the Wharton School at the University of Pennsylvania collected data on which use cases AI is most used for. The report highlighted use cases that businesses and employees rely on daily or weekly. Not so long ago, employees secretly used AI at work.
The year-over-year data indicate that AI products are being adopted at an organizational level.

AI’s impact on our lives will be dramatic & potentially dystopian

Stanford’s 2025 AI Index Report offers metrics demonstrating the significant leaps forward AI has made in performance and usage. The technology has already surpassed human baseline performance on many measures. And the technology’s predictive capabilities are showcased by how effectively LLMs perform in clinical diagnosis. It points to a future where every one of us —physicians, educators, factory workers, and beyond— will rely on AI to make more informed decisions.

MUST READ: Futures essay about the future of superintelligence

The AI 2027 essay, written by researchers and journalists, examines the question of what happens on a global level as we approach AI superintelligence. A long and worthy read, it illustrates that we are much closer to superintelligence than the public may believe and that the snowball effects of achieving it are massive. They predict dystopian outcomes unless the world unifies around regulations and safety guidelines. If their predictions are true,

Apr 8, 202549 min

Prepare Yourself for AI to Increasingly Change Our Jobs

“The future is already here, it's just not evenly distributed.” Science fiction is inspiring, frightening, and often the best lens into the future. Many ideas about the future are b******t —just like this quote being misattributed to the ever-amazing William Gibson— but even the wildest idea shares truths worth discussing.

This week’s newsletter is an exercise in imagining how AI will transform the way that we work. The future will impact us differently because some already live with a future-centred mindset, while others prefer to shift their thinking day by day. One such future-centred thinker is John Whalen, the author of Design for How People Think and the Founder of Brilliant Experience. He shifted from being an AI skeptic to an advocate because he sees a tidal wave of change coming to how product teams operate.

Listen on Spotify | Listen on Apple Podcasts | Watch on YouTube

In the episode, we discuss how he’s implemented AI into his workflows and how he can now accomplish projects in one week that used to take seven weeks to complete. He makes a compelling case for why every team should use AI moderation and synthetic users to enhance product outcomes. But most importantly, he’s become an AI advocate because, over his three-decade career, introducing new tools has always been met with doubts and resistance. Ultimately, businesses force the adoption of tools that deliver a clear ROI.

There’s still much to debate about AI. Reports like this one from Microsoft continue to show that AI isn’t ready to replace humans at key tasks. Another 2024 study found that ChatGPT delivered inconsistent results on a key qualitative research task compared to humans. The most important thing about this study wasn’t that humans outperformed LLMs; it was the significant performance improvement from GPT-3.5 to GPT-4. AI is getting much better at tasks that seemed unimaginable to automate. We’re hearing the same shocking stories across design, development, research, marketing, and sales.
Undoubtedly, AI will be able to automate most of our work within a few years. Will that mean we’ll be replaced? Yes and no. Just like the industrial age and globalization destroyed artisans, AI will significantly reduce the headcount of “artisanal” product people, and the rest of the work will become an assembly line of tool operators. Automation will significantly change many people’s lives in ways that may be painful and enduring. But for the economy as a whole, more jobs will be created, and those jobs will look different from those of today.

Should we be worried about our jobs?

These same conversations are happening across all fields:

* Will AI Replace Therapists?
* As Technology Progresses, Certain Accounting Jobs May Fade Away
* The Risk of Dependence on Artificial Intelligence in Surgery
* AI could terminate graphic designers before 2030

You’re probably reading this with a sense of confidence that you’re shielded from the impacts of AI because you’re working on the bleeding edge of technology. It’s true: you should be better equipped to navigate the changes as they happen and adapt to the future better than others. Conversely, your role faces additional pressure to change faster than in other industries. The business realities of being backed by venture capital and private equity mean you’re always chasing the future. Tech companies and agencies have to unlock benefits from AI or risk losing market share and funding.

The problem is that nobody can agree on AI's expected impact because it’s still just science fiction. According to an OECD report, the level of impact will largely depend on the level of adoption. High adopters might expect a 3x gain compared to those who adopt AI minimally. A McKinsey report highlights the pressure being placed on employees. Their data shows that C-suite executives blame employee readiness as a barrier to gaining benefits from AI.
Only 1% of them believe their AI investments have reached maturity. Combined with last week’s conversation with Jan Emmanuele, this suggests that AI investments in creative augmentation and automation will surge in 2026 and beyond, and that employees will be under a lot of pressure to become more productive or else be replaced. Listen to that episode for more details on how AI is being adopted:

Listen on Spotify | Listen on Apple Podcasts

How will jobs change as a result of AI?

There’s no doubt that our jobs will change. They’ve had to change every time a transformative new technology becomes widely adopted. The only difference now is the speed at which change is happening. Let’s analyze how roles are changing from the perspective of product teams.

* Our jobs used to be distinct. Each of us had specialties and expertise in areas that protected us.
* Our jobs are increasingly commoditized, meaning people from other jobs can do many of our tasks.

For example, a designer can now do tasks that previously were out of their sphere:

* Use ChatGPT and Cove to explore a strat

Mar 13, 20251h 7m

Implementing AI into creative workflows: How to prepare yourself and protect your job

There are many reasons to debate the ethics and implications of AI. But while we do that, hundreds of the world’s biggest brands are rushing to implement the technology into creative and coding workflows. At a time when shareholders are unforgiving and policy making is volatile, business leaders are looking to AI to gain any advantage possible.

Jan Emmanuele is one of the experts that these Fortune 500 corporations rely on to identify and build GenAI creative workflow augmentations and automations. He works for Superside —whom you might remember from our episode with Philip Maggs (Listen here)— because they’re on the leading edge of creating an LLM that interprets your briefing process, design system, brand guidelines, marketing campaigns, and data to automate high-volume creative tasks. In this episode, we focus on how and where AI is applied within organizations and workflows. It details how organizations can prepare themselves for implementing AI and how to address the core barriers and risks of the technology.

Listen on Spotify | Listen on Apple Podcasts

What was most interesting about this conversation was his prediction that the adoption of AI will explode in enterprise orgs starting in 2026 and could continue into the 2030s. He believes that the value of AI in enterprise has already been proven and that more use cases exist than anyone can believe; adoption thus far has been limited only by legal and procurement policies.

If this is true, organizations that aren’t already at least planning for this workflow-automated future will soon be at a huge competitive disadvantage. 10x augmentations of creative output are routinely achieved, and more will be possible for organizations with highly structured and easily repeatable workflows. The gains will be largest in orgs that leverage the uniquely LLM capability of contextualizing outputs based on data.
Examples include localizing campaigns to micro-niche segments or regions of the world.

Headwinds will reduce the number of creatives earning a living wage

As we barrel towards an increasingly inevitable reliance on LLMs, creatives are put in the uncomfortable position of fighting for their survival while protesting for what’s ethically correct. The music industry is the canary in the coal mine in this battle. Many artists earn the majority of their income from their back catalogues, and LLMs are effectively using those albums as mulch to improve generative capabilities. On one side, you have an entire way of life being threatened; on the other, you have artists who will quickly need to master generative capabilities to remain indispensable musicians despite the headwinds. As platforms get better, we’ll just generate the music and images we need instead of hiring professionals.

Overcoming the uncanny valley: Not being able to determine what was generated by AI

What has made all of us feel more comfortable is that AI still sucks at a lot of creative tasks. Blooper reels and countless articles about AI creative fails give us hope that the technology isn’t ready to replace anyone yet. But we’ve learned from our latest episode and many previous ones that the technology is much more ready for primetime than we might believe. Many of the failures we see today result from the false sense of confidence the platforms offer novices. While the simplicity of these tools has exploded the amount of experimentation happening, we’re flooded with more fails than fantastic examples.

Another factor is that the simplicity of GenAI interfaces obscures the complexity happening in the background. We believe we can generate a campaign-ready 20-second video by typing in a prompt.
But the complexity comes from knowing which models, protocols, data sets, and projects to connect for the best outcomes. This is an era dominated by creative technologists who can see these possibilities and stay up-to-date with the latest capabilities. In the hands of someone who understands how to overcome the rawness of the technology, the possibilities are limitless. And for every project we see published, there are at least another dozen working to push those capabilities further in the near future.

Sesame is another example of technology overcoming the uncanny valley by delivering conversational voice capabilities indistinguishable from humans. These developments are happening at such a pace that it’s impossible to keep up. For example, researchers have created an agentic, autonomous framework that iteratively structures and refines knowledge in situ.

The point is that whether you agree with the hype of an AI-powered future or not, businesses everywhere will implement it because the impact is increasingly undeniable.

Action items: What can we do to prepare ourselves an

Mar 3, 202558 min

How Can we Design a New Relationship with AI?

Whether we admit it, like it, or believe it, we’re in a relationship with AI. That’s the first of many powerful reflections made by Sara Vienna, Metalab’s Chief Design Officer, in her must-read manifesto about how design and product must evolve. Unlike the design leaders who speculate about AI's impact, Sara and her world-class team are years ahead. They are designing disruptive AI product experiences and leveraging AI to elevate their workflows. Sara’s episode is one of the most important conversations we’ve had about the future of design and products.

Listen on Spotify | Listen on Apple Podcasts

She believes that AI will change how we work and what we build, and that those who embrace the potential of AI will succeed in the oncoming disruption. Most importantly, the future of product+AI will depend on making five mindset shifts. They’re fundamentally principles for humanizing experiences. The hope is that AI will finally bridge the divide so products can deliver the value we’ve always wished was possible, in the most humanized way possible. But there will be challenges in accomplishing this:

* Most product orgs are built around the concept of delivery, not design excellence
* Unlocking user data: Getting access to valuable data and knowing how to use it in a meaningful way are still more fantasy than reality
* In every direction we turn, trust is being diluted
* Design as we know it will need to be reborn, moving from creating pixel-perfect interfaces to ones that adapt and spawn based on user interactions

Again, I highly recommend listening to the entire episode.

Envisioning the future of design & product

If we extrapolate from Sara Vienna’s vision of how design should change, a couple of core reality checks come to mind:

* Today, we can’t even conceptualize what products will be able to do tomorrow.
Just like new AI tools are being released faster than we can read about them, more teams than ever are competing to deliver the use case & interaction model that will redefine a category. It’s a race to an undefined & moving finish line.

* The underlying models may be the heartbeat of future products, but design will always be the brain. Products plug into whichever model suits them best at a particular moment, usually based on cost and accuracy. But just like each of our minds brings a different lived reality and way of using knowledge, the models are less important than the strategy that’s been designed into the product.
* Fewer designers and product managers will yield immense power. AI automation platforms —like Make and Lovable— can effectively replicate more than half of the products that exist today. This percentage will grow to the point that almost any product can be cloned, undermining its competitive advantage. The designers and product managers working on the future of design will have the funding to compete in a global race they’re likely to lose, because they don’t know what competition they’re actually facing. The rest of us will be working to keep the lights on.

Big question: How should we be using AI, today?

Photoshop celebrated its 35th birthday today and is a perfect reminder of how disruptive platforms eventually become part of the boring vocabulary of the everyday. GenAI platforms, like ChatGPT, are in their infancy. Everything seems equal parts novel and confusing. We’re still unsure how to use this superintelligence, only that we should be using it. Photoshop’s rise was similar: a platform that opened up so many possibilities but whose ultimate impact wasn’t felt until it redefined the designer role many years later.
What’s happening today is that employees are smuggling AI into work, and this makes sense given the recent McKinsey report finding that leaders are slow to adopt because of risks and a lack of vision:

“Our research finds the biggest barrier to scaling is not employees—who are ready—but leaders, who are not steering fast enough.”

Anthropic, the maker of Claude, published their Economic Index report and found that AI use is most prevalent in computer & mathematical occupations. Their AI model is mainly used for programming and administrative tasks. What the data also show is that design and creative tasks aren’t core use cases, yet. And rightfully so: large language models best serve requests about processing content and code, not pixels and ideas. A report about how generative AI is used in journalism showcases this by highlighting that even the creative tasks are largely operational ones, like resizing images and animating.

This data highlights the divide in how leading organizations, like Metalab and Superside, leverage AI compared to the everyday user. While the average person uses Midjourney to generate stock art, leading designers automatically generate localized creative based on design systems and content guidelines.

The reality is that product teams

Feb 19, 20251h 25m

AI is making Knowledge Work cheaper & easier— some will benefit huge

There’s little debate that AI will change the world. What we’re not so sure about is whether AI’s expected disruptions to how we work will be outweighed by the benefits of accessing a super-intelligence.

David Boyle thinks of LLMs as an electric bicycle for the mind, one that enables us to go farther than we ever imagined with much less effort. His opinion comes from being one of the first market researchers to experiment with LLMs and subsequently turning his learnings into the PROMPT series of books to help marketers, startups, researchers, musicians, and other creatives benefit from the emerging technology. He’s an audience research expert who has informed global strategies for many of the world’s biggest brands. In this episode we explore why David Boyle believes that AI can make strategy & research work faster, cheaper, AND better.

Listen on Spotify | Listen on Apple

The conversation explains why any product manager, researcher, strategist, or creative should leverage AI. The greatest advantages are speed and quantity because GenAI overcomes research’s most time-intensive tasks: coding and thematic analysis of large data sets. David admits that one of the biggest challenges is that AI is often confidently wrong and that experts must verify the results.

This episode raises important questions:

* If AI will make all tasks faster, what changes should we expect to our way of working?
Consider how the internet is homogenizing the way we live globally.

* If a human expert must verify results, how can we trust the results of AI tasks once the velocity scales past the number of humans in the loop?
* If executives are excited by AI reducing the cost of research, what will stop them from preferring synthetic or non-human-verified data once the cost nears zero?

Recommended articles

The Future of Design: How AI Is Shifting Designers from Makers to Curators by Andy Budd

“AI is transforming design, shifting designers from hands-on creators to curators focused on strategy” is the most common prediction about where design is headed. The author believes design roles will evolve to where and how they can best deliver value, likely in enhancing the quality of work delivered by AI. As optimistic as it sounds —hey, everyone wants to be more strategic, yay!— the truth is that in this future scenario the concept of being a designer completely changes, with most dedicated to managing AI tasks and the best assigned to bespoke design tasks that must be perfect.

The End of Programming as We Know It by Tim O’Reilly

Makes a case that each fear cycle about software developers getting replaced actually led to an evolution of the craft. He admits that “Eventually much of what programmers do today may be as obsolete” but that it will be more akin to how the old skill of debugging was replaced with roles tackling more complex tasks. As knowledge workers we have to be concerned because our work can’t be quantified and automated in the same way as the production-line model of development.

AI agents will replace SaaS software by Ayan Majumdar

In this analysis of the Microsoft CEO’s statement that "AI agents will replace all software", he breaks down common SaaS use cases and whether AI can replace them.
He concludes that “The shift towards intelligent agents signifies a move away from manual software interactions towards more intuitive, AI-driven processes.” Overall, this is further evidence that AI agents could replace the SaaS layer, which often only existed to give custom lenses to your own data.

AI-Generated Slop Is Already In Your Public Library by Emanuel Maiberg

The enshittification of knowledge is now hitting libraries. Libraries, once keepers and curators of the world’s most important knowledge, now can’t guarantee the accuracy, provenance, and value of many works being submitted. “My library, like most, does not have the resources to be checking Hoopla on a weekly basis to weed out what we wouldn’t want there.”

What being replaced by AI in 2025 looks like

Where does knowledge work go from here? Here’s an example of the disruptions possible today, where OpenAI’s new Deep Research was used in combination with Gamma to do big-consultancy-level research into a market and publish a stunning report. All in 2 minutes.

Agencies & consultants: Any business that doesn’t learn to adopt AI to augment and automate workflows risks losing niche projects to competitors who are optimized for price, speed, and/or scale. Legacy and large orgs tend to overload team members so much to remain profitable that they will be slow to adapt to challengers who turn AI into a major advantage in a price-sensitive market.

Researchers & designers: Orgs are hungry to cut costs and will jump at the opportunity to automate rote tasks. Worse yet, the entire value of design and research is becoming so commodified that at least one of your leaders will have the misguided belief that everything you do can be automated. Find a culture that v

Feb 5, 202553 min

Challenges of leveraging AI in existing products + Implications of Deepseek

Up until recently, Miro was the innovator’s de facto collaboration platform. In recent years a long list of apps added similar functionality to eat away at the online whiteboard segment. Our latest episode with Ioana Teleanu, Miro’s former Lead Product Designer for AI, explores the challenges and opportunities of leveraging AI to enhance an existing product.

Listen on Spotify | Listen on Apple

Key takeaways:

* When a product experience is already good, do we need to add AI?
* AI makes it easier for more products to enter your category and add unexpected competition
* Adding AI forces product teams to ship quickly to be able to learn, sometimes with uncertainty attached
* You must consider whether AI is the right solution to the problem you’re trying to solve

If you have any questions about these or other AI topics, reach out to us and we can help you unpack what they mean for your product.

Next week’s podcast episode features David Boyle, who makes a case for why AI is transforming what we can learn about audiences and how those insights will improve our ability to strategize.

Featured articles

AI Agents: How Businesses Must Adapt or Risk Obscurity (Arpy Dragffy)

Ethan Mollick is right: #AIAgents are going to fundamentally change how websites, apps, and APIs are structured. But the implications go far deeper. We’re rapidly moving from a world where users seek out information to one where it is pushed to them by AI agents acting on their behalf. This shift has profound consequences for businesses of all sizes, and those that fail to adapt risk disappearing into the noise that these agents must sort through.
(Read full article)

The Wild Future of Commerce & The Rise of Conformative Software (Scott Belsky)

This edition explores forecasts and implications around: (1) wild expectations for the future of commerce, (2) the era of “conformative software” that becomes more tailor-made as you use it, and (3) some surprises at the end, as always. (Read full article)

25 Themes for 2025 (Bronwyn Williams)

A cheat sheet of the 25 top things I'm watching unfold in 2025 (See presentation)

Always remember… a good learner appreciates being proven wrong

Biggest story of the week: DeepSeek

Ovetta Sampson is one of the must-follow voices in AI, bringing a rational perspective to an otherwise nonsensical chorus of voices. Read the full article here.

DeepSeek is the new Chinese model that is the biggest AI story of the year (so far). Yes, the model is Chinese and may be compromised. But what’s most compelling about this story:

* Despite the US placing restrictions and pumping country-sized funding into AI, the model outperforms every model made in the USA.
* Just like China has done in countless other industries (e.g. Shein and Huawei), they create copy-cat products that deliver 80% of the value at significantly lower cost.
* We went into 2025 thinking OpenAI had won the AI model wars and that we’d all be subjected to whatever pricing they forced upon us. WRONG. DeepSeek and many more will now come along and chip away at that expectation.

More analysis about China’s disruptive DeepSeek model:

* Why DeepSeek Prompted a $1 Trillion Tech Sell-Off (Business Insider)
* 🐳 I just finished a deep dive into DeepSeek’s latest R1 model & paper. I am genuinely impressed by their approach. A few thoughts. (Reuven Cohen)
* Running DeepSeek r1 32B locally is kind of depressing. (Ethan Mollick)
* DeepSeek is out, the Stock Market crashed and Silicon Valley is in tears.
Here's what happened in the last 48 hours and what it means for your business (Tobias Zwingmann)
* The labour market over the next few years is going to be a complete 💩show thanks to super smart, super cheap open source AI. Here's why. (Reuven Cohen)
* What if I told you the $500B Nvidia selloff is missing the point entirely? I believe the AI Revolution Just Had Its "iPhone Moment" (Simon Taylor)

New AI products worth trying out

Storm
Stanford's new platform enables you to build a paper about nearly any topic, with summary and references. Helpful to get a deep dive into big topics, from your desired perspective.

Riley
A new market research platform which guides users through questions about their product, competitors, and available data. As a proof of concept of where things may head, it is compelling.

Thanks for reading Design of AI: Strategies & insights for product teams! Subscribe for free to receive new posts and support my work. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Jan 28, 202550 min

We're so obsessed with AI's potential that we forget the challenges

Healthcare is constantly highlighted as the industry that will benefit the most from AI. The prospective opportunities are endless: improve access to services, quality of service, patient outcomes, and medical research. One analysis predicts that healthcare could save up to $360B a year by implementing AI.

That's why we invited an expert to discuss what other industries can learn from healthcare's massive AI opportunity: Spencer Dorn, Vice Chair and Professor of Medicine at the University of North Carolina. He is a contributor to Forbes and one of LinkedIn's Top Voices speaking on Healthcare + Innovation.

Listen on Spotify | Listen on Apple Podcasts

Key takeaways from the episode:
* AI has been impacting healthcare for years, especially in creating Electronic Health Records (EHRs) as a way of centralizing information
* AI is being explored today for assistants to medical professionals (e.g. virtual/digital scribes) and across a variety of diagnosis scenarios (video)
* But the rollouts have been plagued by consistent issues related to adoption and poor comprehension of the actual problems
* Getting EHRs implemented took an Obama-era law and incentive plan
* Many of the initiatives aiming to speed up access to healthcare and diagnosis are undermining the relationships across the journey of being a patient
* Technology is rarely the solution because the problem is typically bureaucracy, culture, lack of incentives, and externalities

Lessons for you:
* Beware complexity: Most AI products being sold by major corps and consultancies solve micro-problems and aren't designed to tackle complex problems
* Worry about adoption: It doesn't matter how brilliant your solution is; getting buy-in and adoption within enterprises will be the most pressing challenge
* Think of problems as systems: JTBD and user stories have a tendency to over-simplify problems and underrepresent the range of factors, dependencies, and implications of a problem on the system as a
whole
* Ethnography is key: If you want to make a positive change to a problem space, you need to leverage deep qualitative research techniques, like ethnography, to document and assess what matters and why
* Monitor for unintended consequences: Even after dedicating lots of time to research and planning, we must monitor for unintended consequences that may create more work or more anxiety for the stakeholders within the system.

Challenges building truly human-centred AI products and solutions

AI thought leaders love to push this message of getting to the future quickly. It creates the narrative that we're all falling behind. But let's slow down and recognize that there are countless questions to be addressed before throwing everything out in favour of the shiny new system. This paper from Microsoft explored the many questions that users are posing about using AI agents. These are very important questions that every team should be able to answer clearly for their users before deploying any solution.

This poll from Google's former Chief Decision Scientist highlights that the technical part of implementing AI is no longer the biggest barrier; understanding humans is. If the organizations polled —ones who have successfully implemented AI— are struggling to identify good opportunities and to convince people to use it, then imagine what struggles an everyday org will have.

It's also worth considering that AI adoption is still much lower than we'd expect given all the hype. The implementation of AI —especially across large orgs— may take a decade or more because we're fundamentally asking teams to change the way they work.
Moreover, those in regulated industries need permission to change how they operate before they can even consider implementing AI products. And in the background, many workers are using AI without their employers' knowledge, leading to an endless range of potential risks.

Mindset shifts to help implement AI

In the podcast, Spencer kept highlighting that we need to go into problem spaces with humility and without the expectation that problems are easy to solve. Other guests have suggested other kinds of mindset shifts:
* Jess Holbrook stated we need to be specific when we talk about AI: Too many projects are built off of expectations, not specifications of what AI should do and how
* Kristie J. Fisher believes we need to measure time well spent using AI: The best solution to adoption problems is making sure that the AI product delivers value AND time well spent
* Josh Clark advocated for embracing the weirdness of AI: The imperfectness of AI outputs should be viewed as a creative and innovative feature to help you explore new directions
* Phillip Maggs challenges us to imagine new possibilities with AI: This is your time

Dec 4, 202452 min

AI is reshaping business & shaping a new future | Author of "AI Value Playbook" joins us

In our latest episode, Lisa Weaver-Lambert dispels the belief that AI is incapable of delivering impact in her book "The AI Value Playbook." She also lays out principles for succeeding in your implementation of AI:
1. Your tech stack determines winners: Orgs that were already built to process and leverage data as part of core decision making are at a huge advantage, especially those focused on leveraging insights to learn and iterate.
2. Leadership and strategy matter: The vision, guiding principles, and culture matter. They will dictate the strategy, or the lack of a cohesive one.
3. AI shouldn't be added on top: AI should be viewed as the pathway to removing layers, friction, and complexity.
4. Getting from proof of concept to value is harder: AI reduces the barrier to creating proofs of concept while also layering in a lot more uncertainty about how to make them production-ready.
5. Centralize AI strategy & decentralize implementation: Orgs should have a cohesive strategy owned by a centralized team, but the workflows and use cases should be defined by the teams seeking to gain specific value.

Listen on Spotify | Listen on Apple | Watch on Youtube

Please rate the podcast
If you've listened to the podcast, please help us by giving us a rating. It helps us get in front of more people and know that what we're publishing is delivering value.
Rate us on Spotify | Rate us on Apple Podcasts
And if you have comments, questions, or suggestions: [email protected]

New report showing use of Anthropic (Claude) doubled, while OpenAI lost 1/3

Menlo Ventures published their 2024 report: The State of Generative AI in the Enterprise. It shows the continued maturation of the AI market and clear use cases where the tech is being leveraged. Not surprisingly, task-level use cases that can be directly evaluated/audited are coming out on top. Also, the layers of the AI stack are becoming more distinct, with some products starting to create their own moats.
As we move into 2025, expect the Data layer to split as more orgs realize that they need a semantic layer to structure and make sense of first-party data.

The LLM market share data makes OpenAI look like the big loser. But I suggest throwing out the 2022 and 2023 data, since adoption was so low and the tech was leveraged for experimentation rather than impact. 2024 is the year when AI became the workhorse for the first time, powering countless products. Nonetheless, it is compelling to see Anthropic and Claude shoot up. Their focus on UX seems to be paying dividends; that, or OpenAI's dilution of trust is.

Of no surprise, prompt engineering is falling off a cliff. It was a band-aid approach for a tech that had no standards yet. For reference, a business that built their product through prompts often had to rebuild all those prompts whenever a model was updated.

AI use & impact assessment survey
Please share your experiences and point of view in our year-end AI research study. Your lessons and opinions will shape a critically important assessment of how & if AI is positively impacting individuals and teams. Less than 5 minutes of your time will help us a lot.

Perplexity is one-upping Google by introducing AI-powered shopping journeys

Perplexity, the upstart GenAI search firm, is firing shots at Google by taking a refreshing look at shopping. Rather than focusing on someone searching for a product (e.g. patio furniture), they are taking a very human-centred approach by focusing on what a user is trying to accomplish (e.g. renovate my outdoor living space). The platform then provides ideas, support, and instructions.
Plus, it recommends products to buy.

While this is immensely helpful, it brings up the ever-present concern that AI will pick winners and losers for us. Where Google served up dozens or hundreds of results and encouraged us to make our own decisions, AI only shows a handful of options. This is the beginning of the platform as expert, and it could change how we interact with the world in a huge way. It could lead to small merchants being shut out, or even grow distrust of options that aren't recommended by a platform.

Alarming data showing that achieving AGI could destroy market wages

Economists at the International Monetary Fund have modeled data showing that if Sam Altman & crew succeed at bringing AGI to the world faster than expected, it could set into motion a total destruction of market wages (aka devalue everything). Their model also showed that on the expected timeline of AGI, wages will continue to rise as humans continue to do the thinking for the machines.

Read the report

Thanks for reading Design of AI: News & resources for product teams! Subscribe for free to receive new posts and support my work. This is a public episode. If you would like to discuss this with other subs

Nov 22, 202452 min

How AI-mature is your organization? And what are the implications of it?

The last two years have been extremely stressful for anyone working in tech. There's been a consistent sense that we all need to do more with less; that our jobs are on the line. And now AI is being touted as the cheat code that will unlock productivity and profit gains.

In our latest podcast, Peter Merholz (add him on LinkedIn) doesn't see AI helping much in the short term because teams are too over-tasked to believe they have the time to try new models of working. He also believes that most organizations don't have cultures and leadership that promote experimentation and reward learning.

Listen on Spotify | Listen on Apple | Watch on Youtube

What makes matters worse is that simply "using AI" won't get you the results you need. Simply using ChatGPT or Claude will not give you and your business a significant boost, because data is at the heart of AI. The more of your first-party data that you train models on, and the more that you craft agents around specific workflows, the closer you'll get to what AI acolytes are selling. Accenture calls this AI maturity: advancing from practice to performance.

And this is where Peter Merholz believes that most orgs will be blocked. His experience working in mega-corps has found that most aren't learning cultures. Introducing new tools, mental models, and ways of working isn't well-received.

AI use & impact assessment survey
Please share your experiences and point of view in our year-end AI research study. Your lessons and opinions will shape a critically important assessment of how & if AI is positively impacting individuals and teams. Less than 5 minutes of your time will help us a lot.

Valuable lessons
💡 Nearly half of workers are uncomfortable admitting to their manager that they used AI for common workplace tasks
💡 Evaluations —or "Evals"— are the backbone for creating production-ready GenAI applications.
💡 Ten lessons that separate impactful training from mere AI showcases
💡 Even teams actively working with AI are wrestling with fundamental knowledge-structuring challenges. The tools are advancing faster than our practices.

Exciting AI jobs
👉 USA | Anthropic | Strategic Product Management
👉 USA | World Economic Forum | Head of Data and AI Innovation
👉 USA | Google DeepMind | Group Product Manager, Generative AI Tools for Music Creators
👉 USA | Amazon Web Services | Generative AI Strategist, Generative AI Innovation Center
👉 Australia | Canva | Creative Technologist (Gen AI)
👉 Canada | Autodesk | AI Research 3D Dataset Creation & Annotation Manager
👉 Canada | Robinhood | Staff Product Designer, AI Investing
👉 Canada | McAfee | Sr Product Manager, GenAI

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Nov 13, 20241h 2m

Phillip Maggs maps the future of design + 20 lessons from our first 20 episodes

Speaking to Phillip Maggs on Design of AI had so many 💡 moments:
1. Want to use AI to get a career advantage?
Consuming AI content isn't enough to get ahead; you need to experiment with the new material. Stretch what you believed was possible and you'll gain new capabilities.
2. New careers and roles are being defined right now
GenAI makes it possible for anyone to quickly learn about a topic or skill. You might think you're average, but you can quickly put together a unique skill profile that makes you a unicorn, especially if you're committed to being curious about new technologies and how to leverage them.
3. Much of design should be automated
We forget that a lot of design tasks are literal assembly-line outputs: banners, emails, ad variants. These rightfully should be automated because they exist in the world for such a short period. However, assets that represent your brand to millions, or which will be in market for years, must be hand-crafted.
4. Design systems and brands are rules
The more we codify what our products and brands should be, the more we unlock the augmenting powers of AI. Phillip imagines that a day will come when the LLMs about our brands will shine light on ideas we otherwise wouldn't have considered because of our own biases.
5. A lot of AI design products are "party tricks"
Sure, a tool that can generate designs based on text prompts is cool, but is it significantly saving time? Is it aware of what qualifies as a good output for your brand? Does it understand how you communicate with customers? The outcome of these tools likely is not a significant ROI.

Listen on Spotify | Listen on Apple

AI tool of the week: Cove.ai
Cove.ai is like Miro meets Claude. You can prompt and build assets, just like in Claude. But what makes this tool fascinating is that you can save your work to a visual board and invite others to collaborate with you. The most surprising finding from using this platform is recognizing that in a typical project I'm outputting so many assets.
The volume makes infinite-scroll interfaces painful, and even makes Claude Projects' interface seem deficient. The visual board interface is much more functional, since I can sort dozens of cards into a work surface that makes sense.

Our First 20 Episodes: 20 Lessons for How to Advance Your Career in the Era of AI

We're being taught to fear AI and how it is expected to impact our jobs and workplaces. But our guests see distinct opportunities for us to embrace this time as a chance to advance our careers.

Lesson 1: Embrace AI as a tool to enhance creativity, not replace it
Maarten Walraven-Freeling, our guest on Episode 3, highlighted how AI tools like AIVA and Google DeepMind's LIA can empower musicians to generate new music and expand their creative possibilities. Rather than fearing AI as a threat, musicians can leverage these advancements to enhance their craft and explore uncharted artistic territories.
Episode: The future of music in the era of generative AI
Listen on Spotify | Listen on Apple

Lesson 2: Understand the evolution of AI interfaces to design better products
In Episode 4, Emily Campbell traced the history of AI interfaces, from early chatbots to voice assistants and brain-computer interfaces. By understanding this evolution, product teams can better anticipate future trends and design AI products that are intuitive and user-friendly.
Episode: How AI is reshaping UX and the new role for designers
Listen on Spotify | Listen on Apple

Lesson 3: Address the copyright challenges posed by generative AI
Virginie Berger, in Episode 5, shed light on the ethical and legal implications of AI models trained on copyrighted data.
Creatives, businesses, and policymakers must work together to establish fair compensation models and licensing frameworks to protect artists' rights in the age of generative AI.
Episode: GenAI's copyright problem: Training & derivative copies
Listen on Spotify | Listen on Apple

Lesson 4: Prioritize problem-solving over technology when building AI startups
Ben Yoskovitz, our guest on Episode 6, emphasized the importance of focusing on real-world problems and customer needs rather than solely on AI technology. Startups that prioritize solving genuine challenges are more likely to achieve product-market fit and attract investment.
Episode: Venture building: Why AI products may fail
Listen on Spotify | Listen on Apple

Lesson 5: Approach emerging technologies as enablers of people, not magic
In Episode 7, Dr. Llewyn Paine cautioned against blindly embracing the hype surrounding emerging technologies like generative AI. To find the value of a technology, we need to understand how people and teams work. The most valuable opportunities are buried in behaviors and in assessing what people are willing to adopt.
Episode: The secrets to researching potential emerging tech products
Listen on Spotify | Listen on Apple

Lesson 6: Leverage AI

Oct 25, 20241h 1m

Sentient Design: Should we be chasing weirdness and divergent ideas?

GenAI's promise is that digital experiences will become more intelligent. Big Medium founder Josh Clark and his daughter, Veronika Kindred, are the authors of the upcoming book "Sentient Design" and the latest guests on the podcast. They see products that are radically adaptive to our situational needs and that collaborate with users in ways that seemed insane a few years ago.

Listen on Spotify | Listen on Apple Podcasts

But what struck me the most were three things:
* Veronika, a Gen Zer who figuratively grew up inside of tech because of her father's work, sees the role of AI much differently than we older folks would expect. There's an awkward comfort with the centralization of power within these systems, and the expectation that we, the users, will decide whether it is used for good or bad.
* Not building towards personalization. Josh knows that it requires far too much data for a system to understand us and what we truly need. So products are better suited to inferring where we are in our journey, making assumptions about what might have changed about us, and adapting to meet us where we are.
* Josh is a champion for embracing the weirdness of AI. Rather than be intimidated and worried about hallucinations, use the not-so-perfect technology in ways that provide unexpected results.

The counterpoint to intelligent products continues to be how much intelligence a user wants and how much personal information they are willing to give up for it. There's nothing more uncomfortable than a salesperson who doesn't get your signals.

Adobe's Project Concept is the start of something huge

Embracing the weirdness is exactly what Adobe's new product, Project Concept, does. Better you watch the video than me try to explain.
It will be interesting to see how agencies respond to the further commoditization of their expertise.

Always remember, GenAI is great at the boring stuff

Amazon, in its quest for greater efficiency, has developed new systems to shave seconds off each package delivery and to help customers make faster buying choices, even for new product types that they may know little about. The company announced Wednesday that it has created spotlights within its trucks to guide delivery people to packages at each stop along a route.

"When we speed up deliveries, customers shop more," said Doug Herrington, CEO of Amazon Worldwide Stores, in remarks at the event. "Once a customer experiences fast delivery, they will come back sooner and shop more."

Interestingly, this also highlights the tech's ability to imagine solutions to problems that humans may not be able to see otherwise. You could call that embracing the weirdness again. We'll go into this conversation in detail when we interview Lisa Weaver-Lambert, the author of The AI Value Playbook. In the book she interviewed business leaders to document exactly where and how AI has been delivering value.

Multi-modal AI: 8 ways computer vision will change our lives

While GenAI has been monopolizing the headlines, Apple, Meta, and Snap continue to invest in augmented reality headsets. Apple's Vision Pro landed with a thud —largely due to the price and home-bound use cases— but the others stirred buzz because they focused on lightweight and fashionable eyewear (courtesy of Meta's partnership with Ray-Ban).

We've been here before, though. Google Glass famously failed. And no one remembers Snap's previous eyewear. But now is different. AI researchers have made huge advancements related to computer vision.
If AI enables computers to think, computer vision enables them to see, observe, and understand.

Continue reading the article on LinkedIn…

Want to join as a contributor?
Contact us at [email protected] to help us collect the best resources about how AI is shaping the world around us.

Thanks for reading Design of AI: News & resources for product teams! Subscribe for free to receive new posts and support my work. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Oct 15, 20241h 8m

Playstation's Kristie J. Fisher + Guide to designing a GenAI product

In this newsletter:
* Podcast episode with Kristie J. Fisher, PhD, the Sr. Director of Global User Research, PlayStation Studios
* Guide to designing a GenAI product: From vision to content strategy
* Poll for the AI community

The biggest challenge facing AI products isn't whether users would use your product; it's whether you're delivering reasons to convince them to switch from their existing solution. This is extra difficult when leveraging an emerging technology, like GenAI, because of a few key factors:
* GenAI tools ask users to give up control and have faith that the system knows what's right—the exact opposite of what we've been training users to expect from productivity tools
* GenAI is still nascent and doesn't always get it right, meaning that in some situations it will deliver an inferior output (and need to be re-prompted)
* Users quickly run out of ideas about what to prompt because they don't know what the tech is capable of

So as much as product teams can focus on the incremental delivery of value to users, those efforts are likely to fail because we're asking users to take a leap of faith, something that users, especially in B2B and enterprise, don't want to do.

That's why this week's episode with Kristie J. Fisher, PhD was so fascinating. Having worked on launching new products and features at Xbox, Google, and PlayStation, she has learned how to dive deeper into the psyche of users and gamers. In there is the secret to making a product enjoyable: defining metrics to ensure a user's time is well spent.

When building and researching, we must be committed not only to delivering value, but to ensuring that the experience is enjoyable and worth changing your workflows for. So when building your GenAI product, always create evaluative metrics for the level of impact. The higher you score, the more likely a switch.
It also offers an opportunity to qualitatively investigate where and how the impact is happening, so you can mine valuable product ideas.

💡 Have questions about your GenAI project? Post them on the Design of AI LinkedIn page.
💡 Or contact me via email to privately discuss your project

Kristie J. Fisher, PhD, has spent the last 15 years conducting user experience research and building and leading research teams across a variety of product domains, primarily in gaming. She currently leads the global PlayStation Studios User Research team. The mission of her team is to empower PlayStation's studios to get to great faster by being vision-led and data-informed. At Google she worked on Stadia, Gmail, and Ads, and was a co-author of Google's People + AI Research Guidebook. Prior to Google she was at Xbox Research, collaborating with game producers and development teams to improve player experience on Xbox, Xbox Kinect, and Windows.

Guide to designing a GenAI product: From vision to content strategy

Working with GenAI requires designers to shift their mental models from deterministic to probabilistic output. Not only are you working with a new material, the technology is so new that there aren't any best practices (yet). This guide is an overview of the technology and lessons I've learned in my own AI consulting projects at PH1 Research and from the amazing experts we've had as guests on the Design of AI podcast (Spotify - Apple).

🎯 Continue reading the guide

Sections in this guide:
* Background & reality-check
* Rationale for AI
* AI product vision
* AI product strategy
* AI product principles
* Design's role in crafting GenAI products
* Content strategy

Poll: We want to help our community better so we can deliver better resources. We started Design of AI to help teams quickly learn how to best leverage #GenAI.
In the coming months, we're launching some initiatives to improve knowledge sharing to address concerns we've heard:- Lack of archive of products/tools/features others have built- Lack of best practices- Lack of visibility on why initiatives have failed- Lack of mentorship & sense of doing it all aloneIf you have any questions or want to help with building out resources for some of these, contact us [email protected] for reading Design of AI: News & resources for product teams! This post is public so feel free to share it. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Sep 26, 202449 min

Spotify’s former data alchemist: Evaluating when & how to use GenAI

Episode 17. Our guest is Glenn McDonald, who was Spotify's Data Alchemist, building it into an algorithmic powerhouse. We're critically evaluating algorithms' effectiveness and why GenAI probably isn't the best technology for many problems.

Some key insights:

#1. As Spotify's former data alchemist, I expected huge advocacy for #ML & #AI as predictive technologies. Instead: we must not play god with algos. They should be assistive tools to get people to where they're headed. Prediction leads to errors.

#2. You must be able to evaluate algorithms. Too often we're deploying fancy tech with no way to know it is performing better than an alternative. #GenAI has a huge risk of this because the assumption is that it solves everything. But the cost of deploying it is also very high.

"I think the main thing I've learned is actually not to think about it as prediction. I think the thing that happens to you when you start thinking about things as prediction (and I think this applies to thinking about LLM outputs as predicting text; it also applies to A&R in music, like predicting hit artists) is that the moment you start thinking about it as prediction, you've sort of internalized the ugly idea that the future is kind of determined and you're just attempting to guess what it's going to be, and thus profit by anticipation. And I think it's a lot more productive to not think about the future as something you're predicting, but as something you're making."

"I think a lot of the time we evaluate new tech against really poor baselines, like against randomness, or against the most popular things, or, like you said, against just our intuitive guesses. And in those contexts, sometimes the fancy tools seem like, oh, they're clearly better. But then when you compare them against, oh, what if we just did some math, you realize: oh, the math's even better. It's a lot simpler.
"

The episode is hosted by:
Arpy Dragffy Guerrero (Founder & Head of Product Strategy, PH1 Research) https://www.linkedin.com/in/adragffy/
Brittany Hobbs (VP Insights, Huge) https://www.linkedin.com/in/brittanyhobbs/

Glenn McDonald is a music evangelist, algorithm designer, software engineer, and technology strategist. He created the music-exploration website Every Noise at Once, and for 12 years was the Data Alchemist at the Echo Nest and Spotify. He has written about music online since before "blog" was a word, and his first offline book, You Have Not Yet Heard Your Favourite Song: How Streaming Changes Music, is available now from Canbury Press.

00:24 Meet Glenn McDonald: Spotify's Data Alchemist
01:50 The Evolution of Music Discovery
08:39 The Role of AI in Music and Beyond
13:29 Challenges and Future of AI in Music
29:14 Navigating AI in the Workplace
31:25 Designing User-Friendly Algorithms
34:59 Challenges with Algorithmic Recommendations
39:42 Evaluating AI and User Testing
47:41 The Future of Music and AI

Thank you for listening to the Design of AI podcast. We interview leaders and practitioners at the forefront of AI. If you like this episode please remember to leave a rating and to follow us on your favorite podcast app.

Take part in the conversations about AI https://www.linkedin.com/company/designofai/
And subscribe to our newsletter for additional resources https://designofai.substack.com/ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Sep 18, 202459 min

Service design of AI: Designing the first Copilot w/ Microsoft & OpenAI

Our guest is Yasemin Cenberoglu, who was the first designer to work on Microsoft's Copilot, all in secret, before the world was exposed to ChatGPT for the first time. Yasemin is a Principal Design Manager at Microsoft, leading the Copilot product for Teams Meetings, Calling, and Devices. She's the first designer to shape what Copilot is today. Previously, she served as the Director of Design at Digitalist. Yasemin is an advisory board member at the IDEA School of Design at Capilano University. She studied in Germany and then at Cal State, in the Bay Area.

00:49 Yasemin's Background and Role
02:09 Design Differences: Europe vs North America
03:44 Service Design Methodologies
03:58 Co-Creating with OpenAI
04:38 Blueprints and Customer Journeys
05:27 Rapid Prototyping and Testing
06:20 Reconnecting with Yasemin
07:06 The Excitement of Innovation
10:04 Defining Value Drivers
11:50 Building High-Level Scenarios
12:49 Managing Feasibility and Vision
15:53 Lessons Learned from GenAI
21:05 Testing and User Feedback
22:51 Iterative Design and AI
31:52 Building Trust in AI
34:12 Service Design in AI
39:11 Deciding Between Copilot, Agent, or Chatbot
43:41 Future of Assistive Software
47:27 Advice for Aspiring AI Designers

The episode is hosted by:
Arpy Dragffy Guerrero (Founder & Head of Product Strategy, PH1 Research) https://www.linkedin.com/in/adragffy/
Brittany Hobbs (VP Insights, Huge) https://www.linkedin.com/in/brittanyhobbs/

Thank you for listening to the Design of AI podcast. We interview leaders and practitioners at the forefront of AI. If you like this episode please remember to leave a rating and to follow us on your favorite podcast app.

Take part in the conversations about AI https://www.linkedin.com/company/designofai/ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit productimpactpod.substack.com

Sep 5, 202451 min

Futures design: Build AI products that customers want & finding use cases

How should product teams be leveraging GenAI? Product teams are struggling to find the use cases that deliver the most value to customers and where the technology can be effective. And teams that have built AI products are finding that there is often a mismatch between what customers find valuable and what the technology can do.

Our guest is Arpy Dragffy Guerrero, founder of PH1 Research, where he has consulted for Spotify, Microsoft, Mozilla, and the National Football League on researching and strategizing how to best leverage emerging technologies. He has worked on products across AI, personalization, Web3, location-sensing, and more. His focus is creating product and testing strategies to quickly pinpoint the best opportunities for new products.

Follow him on social:
https://www.linkedin.com/in/adragffy/
https://twitter.com/arpyd

Arpy maps out Futures Design: how to build AI products that customers want. We discuss strategies for product teams:
‣ Learning from failure and the struggles of early AI
‣ The challenge of identifying the impactful use cases of AI
‣ The importance of value drivers (and why they aren't JTBD)
‣ Applying systems thinking to AI products and strategies
‣ People hate chatbots; agents will open new possibilities
‣ Examples of how agents could transform use cases and roles

Please subscribe to Design of AI: The podcast for product teams, on Spotify, Apple Podcasts, YouTube, and Substack. We interview leaders and practitioners at the forefront of AI to help product teams navigate where and how to leverage AI.

Substack newsletter: https://designofai.substack.com/
Join the conversation on LinkedIn: https://www.linkedin.com/company/103164463/

This Design of AI episode is brought to you by PH1: a research & strategy consultancy that helps clients build AI products that customers want. https://ph1.ca

Aug 7, 202446 min

Researching & building responsible AI within tech’s biggest platforms

What is the path to building responsible AI products? We have a special guest: Jess Holbrook, Head of UX Research for Microsoft AI.

We discuss:
‣ Responsible AI: what it is and why orgs need a clear vision for it
‣ Data transparency: ensuring you are communicating appropriately
‣ Becoming one of Google's first user researchers working on machine learning
‣ Philosophical differences in user research at Google, Meta, and Amazon
‣ Bridging academic research and the practical development of AI products
‣ The paradigm shift that big tech is expecting AI to deliver
‣ Why the last thing you should want is a user over-trusting your product

As one of the first user researchers working on AI products, Jess offers a deep and informed perspective on the challenges and opportunities of working with this new technology. He challenges organizations to build values into their products, unwaveringly and without vagueness.

Jess Holbrook is the Head of UX Research for Microsoft AI. Prior to that he was Director of UX Research for Generative AI and Responsible AI at Meta. He got his start in human-AI research about 10 years ago at Google, where he was a founder and lead of Google's People + AI Research group (PAIR). Prior to joining Google, he was a UX Researcher at Amazon and Microsoft. He received his Ph.D. in Psychology from the University of Oregon and a B.S. in Psychology from the University of Washington.

Follow Jess:
https://linkedin.com/in/jessholbrook/
https://x.com/jessscon

Resources mentioned by Jess:
https://pair.withgoogle.com/
https://research.google/teams/responsible-ai/
https://runwayml.com/

Please subscribe to Design of AI: The podcast for product teams, on Spotify, Apple Podcasts, YouTube, and Substack. We interview leaders and practitioners at the forefront of AI to help product teams navigate where and how to leverage AI.

Have questions? Join the conversation in our LinkedIn community: https://www.linkedin.com/company/designofai/

Hosted by:
Brittany Hobbs https://www.linkedin.com/in/brittanyhobbs/
Arpy Dragffy Guerrero https://www.linkedin.com/in/adragffy/

This Design of AI episode is brought to you by PH1: a research & strategy consultancy that helps clients build AI products that customers want. https://ph1.ca

Jul 18, 20241h 0m

Unlocking AI product success: Coaching teams through uncertainty & design risks

AI is changing the role of the designer and shifting how product teams succeed. We have a special guest: Scott Jenson, formerly of Apple, Google, and Frog Design.

We discuss:
* Why designers feel like their entire job will go away
* What advice he offers to the teams and individuals he coaches
* How AI is over-hyped and where it will have impact
* Lessons from working at the forefront of mobile technology
* Why Google, Apple, Meta, and Microsoft are all racing to get there first
* Recommendations for building successful products today

This conversation is more of a coaching session for the designers, researchers, and product teams trying to navigate this time of great change. We try to cut through the hype and distill key lessons that will help you in your careers.

Scott Jenson has worked in user interface design and strategic planning for over 30 years. He was the first member of the System Software Human Interface group at Apple in the late 80s, working on System 7, the Apple Human Interface Guidelines, and the Newton digital assistant. After Apple, he was a freelance design consultant, doing work for Netscape, Mayo Clinic, American Express, and several web startups. He then became director of product design for Symbian, and later managed mobile UI design at Google for 6 years. He left to become creative director at frog design for 2 years, but returned to Google to explore advanced UX concepts for IoT and Android. He holds 35+ patents. https://www.linkedin.com/in/scottjenson/

Please subscribe to Design of AI: The podcast for product teams, on Spotify, Apple Podcasts, YouTube, and Substack. We interview leaders and practitioners at the forefront of AI to help product teams navigate where and how to leverage AI.

Have questions? Join the conversation in our LinkedIn community: https://www.linkedin.com/company/designofai/

Hosted by:
Brittany Hobbs https://www.linkedin.com/in/brittanyhobbs/
Arpy Dragffy Guerrero https://www.linkedin.com/in/adragffy/

This Design of AI episode is brought to you by PH1: a research & strategy consultancy that helps clients build AI products that customers want. https://ph1.ca

Jun 28, 202456 min

Content design: How creatives are leveraging prompt engineering to innovate ecommerce platforms & improve brand-building

This conversation is a deep case study into the capabilities of the technology today and how product teams must leverage both creative experts and these emerging technologies, side by side. Our guest is Trisha Causley from Shopify.

Topics we discuss:
▪ Why Trisha went from an AI skeptic to a champion
▪ What types of creative tasks GenAI is best at
▪ Tactical lessons for leveraging GenAI across product experiences
▪ Why prompt engineering must become part of your toolkit
▪ Shopify's plan to leverage GenAI to scale and personalize brand-building
▪ Why GenAI enhances the role of creatives by expanding what you do

Trisha Causley is a Senior Staff Content Designer at Shopify in Toronto, Canada, where she works on AI-powered product features. She previously worked with IBM and on the Watson team. https://www.linkedin.com/in/tcausley/

The Design of AI podcast is available on Spotify, Apple Podcasts, and YouTube. Have questions? Join the conversation in our LinkedIn community: https://www.linkedin.com/company/designofai/ Subscribe to the Design of AI podcast for more in-depth resources for product teams.

Hosted by:
Brittany Hobbs https://www.linkedin.com/in/brittanyhobbs/
Arpy Dragffy Guerrero https://www.linkedin.com/in/adragffy/

Jun 19, 202445 min

Innovation lessons for brands and product teams investing into AI

Why are brands investing in AI? How can they succeed? What can we learn from how experts in the field of innovation lead transformation projects? Where will AI actually deliver impact in the near term? Joining us is Nick Sherrard, who is involved in these conversations across Fortune 500 companies, government, and startups.

He is a co-founder of Label Sessions, the global innovation expert network, and Label Ventures, the venture studio. He is also a board member at Substrakt, the digital agency, and Collective art gallery in Edinburgh. Nick is often said to be the only person to have run an innovation lab inside a bank, a government department, a Big 4 consultancy, and a circus. His approach to making change happen in organisations fuses his more classic brand and product development background with the devising mindset of an arts producer. Nick advises boards and entrepreneurs globally.

In this episode we cover:
* Top-down and bottom-up approaches to leading AI projects
* Why the history of art and innovation is the history of rejection
* Why leaders of AI projects often don't anticipate what's needed
* The problem with design thinking when building AI products
* How the creative & consulting worlds are enhanced by AI
* Use cases where AI will have impact

Also find us on Apple Podcasts & Spotify.

Have questions? Join the conversation with other product leaders on LinkedIn: https://www.linkedin.com/company/designofai/ Subscribe to the Design of AI podcast for more in-depth resources for product teams.

Jun 12, 202453 min

AI is disrupting the design & product delivery process [Lessons for startups, enterprise & UX]

Building products with GenAI brings powerful new capabilities but also a whole new set of uncertainties. Teams can't rely on best practices because the technology is changing so quickly and users are adopting change cautiously. Designing and shipping products can no longer be thought of as a linear process.

Alexandra Holness, Senior Lead Product Designer at Klaviyo, joins us to share lessons, cautions, and a path forward to help product teams build AI products that customers want. She sees that successful product teams will depend on designers, data scientists, and engineers working more closely than ever, because it is very hard to predict how customers will use models until you've shipped them.

Topics discussed:
* How she created her role leading AI design
* Assumptions the team had about how to leverage AI
* What works and doesn't from a design perspective
* AI models being so nascent that it's hard to design a UX
* Designers, data scientists, and engineers working together in new ways
* Why building AI products is very different from traditional product development
* Why building effective AI products requires culture change
* Why you need to test out potential futures

Have questions? Join the conversation: https://www.linkedin.com/company/designofai/ Subscribe to the Design of AI podcast for more in-depth resources for product teams.

Jun 4, 202448 min

AI can innovate behavior change strategies & transform personalization

Dr. Amy Bucher literally wrote the book on behavior change. She joined the podcast to discuss how GenAI can transform what tech is capable of achieving on a human-outcome level:
- How AI can open entirely new possibilities for behavior change and lead to monumental outcomes
- Opportunities and risks of leveraging AI personalization
- Reinforcement learning and what it is
- Objective-driven AI and why we should start focusing more on outcomes
- Why wearables may open new possibilities
- Considerations around proprietary vs. commercially available AI
- Why having an AI scientist will be critical for any team, and why it may not be as hard to hire for as you think

May 28, 202456 min

Case studies: Leveraging AI to build conversational bots & analyze conversations [Design of AI podcast]

How can AI make our workflows and products more effective? It's a question every product team is asking itself as it decides whether to invest in developing or licensing products. Let's learn from two practitioners building with and leveraging AI today.

Two case study presenters from the upcoming Rosenfeld Design with AI Conference (June 4 & 5) will be with us to detail how they leveraged GenAI. Savannah Carlin, Staff Product Designer at Marqeta, will detail how to design conversational interactions with AI. Weidan Li, Design Research Lead at SEEK.com, will outline AI's performance in analyzing qualitative data.

Design of AI, the podcast for product teams. Hosted by Brittany Hobbs & Arpy Dragffy Guerrero.
Find us on LinkedIn: https://www.linkedin.com/company/designofai/
Subscribe on Spotify, Apple, and YouTube for weekly interviews with leaders at the forefront of AI. And join our Substack newsletter to get resources, insights, and strategies for product teams.

May 21, 202453 min

The secrets to researching potential emerging tech products

Building products using emerging technologies is more difficult. As we're seeing with AI products today, teams often struggle to choose which use cases and customer profiles to focus on. It's harder because new technologies make us obsess over what's possible rather than what people actually need.

Dr. Llewyn Paine joins us to share lessons and strategies from advising teams working on spatial computing, virtual reality, and robotics. Her expertise is helping teams make better product decisions through research. We'll discuss how to identify your best potential customers and design higher-value products and services they'll love to use.

She is an innovation strategy consultant with nearly two decades of experience in emerging technologies, including mixed reality and AI at Microsoft, and experimental media for Disney. She has helped emerging technology teams launch flagship products and secure investments of over $300M.

Subscribe at designofai.substack.com to get additional resources.

She's speaking at the Designing with AI conference on June 4-5, where she'll be diving into her most recent work: protecting the biometric data of research participants by leveraging AI.

May 15, 202449 min

Venture building: Taking AI product ideas from 0 to 1

There are so many new GenAI products coming to market that it is hard to believe even a fraction of them will become sustainable businesses. Ben Yoskovitz, Founding Partner of Highline Beta and author of Lean Analytics, joins us to discuss why many of these startups will fail to find product-market fit. By rushing to get to market, they're likely skipping key steps that would typically improve their likelihood of success. We discuss the process his venture studio uses and where he sees opportunities for AI products to deliver more value to consumers.

Ben's newsletter: https://www.focusedchaos.co/

Design of AI, the podcast for product teams. Hosted by Brittany Hobbs & Arpy Dragffy.
Subscribe to the podcast: https://www.youtube.com/@DesignofAI

May 8, 202451 min

GenAI's copyright problem: Training & derivative copies

AI has the potential to be a transformational technology. But how is it trained, and how can you track authenticity? Virginie Berger, Chief Business Development and Rights Officer at Matchtune, joins us to discuss developments in copyright issues related to creative fields, in hopes of shedding light on what this means for other industries. A particular issue is what happens to business models when you can get replicas elsewhere and have no clarity on how they were derived.

We explore how product teams can and should adapt. What's important is protecting the rights of your users and leveraging LLMs that ethically process the data you input into them.

Episode hosted by Brittany Hobbs & Arpy Dragffy Guerrero. Please subscribe to Design of AI, the podcast for product teams who want to leverage AI to transform their industries. Visit https://designof.ai to get AI news & tools that matter to product teams.

Apr 23, 202438 min

How AI is reshaping UX and the new role for designers

Emily Campbell joins us to discuss the future of UX. Her Shape of AI newsletter and community have become the go-to resource for AI product design patterns. She sees AI products getting to market with far less involvement from design than they should have. Design will undoubtedly experience shocks, with roles changing and anti-patterns emerging, but also entirely new opportunities for design to shape adaptive experiences that give users new ways to personally interact with products. We discuss what comes next after prompt-based text interfaces.

Episode hosted by Brittany Hobbs & Arpy Dragffy Guerrero. Please subscribe to the Design of AI podcast. We speak to leaders at the forefront of AI to learn how great AI products are designed and how they're transforming industries. To contact us, visit our website: designof.ai

Apr 16, 202448 min

The future of music in the era of generative AI

Maarten Walraven-Freeling, co-editor of MUSIC x and co-CEO of Symphony Media, joins the Design of AI podcast to discuss how AI will impact the music industry. We look at how digital streaming platforms and algorithmic discovery have already led to monumental changes to the business, and what to expect now that generative AI tools like Suno are making music creation easier and more accessible. It is clear that music is one of the first and most important battlegrounds where we see the potential of AI as a creative tool, but also where concerns are growing about GenAI platforms being trained on content without the permission of copyright holders.

The show is hosted by Brittany Hobbs & Arpy Dragffy Guerrero. Subscribe on Spotify, YouTube, or Apple to get our latest episodes. We speak to leaders at the forefront of AI to learn how great AI products are designed and how they're transforming industries. To contact us, visit our website: designof.ai

Apr 9, 202448 min

Designing AI products: Building effective products with LLMs

Peter Van Dijck, Founding Partner of AI agency Simply Put, joins us to discuss how his team designs and builds AI products. Peter, formerly of Huge and Work & Co, shares insights into how his background as an information architect and designer enables his team to see opportunities to discover and build the right product for orgs. We discuss the growing potential of LLMs to take on more use cases, and the ways in which human-centred design informs the decisions that need to be made.

Apr 1, 202448 min

How AI is changing ad agencies & the creative process

Ad agencies have always had to be ahead of the curve. They need to predict what clients will need tomorrow. But AI has the potential to change everything about their workflows, business models, and value. We speak with JP Holecka, CEO of POWERSHIFTER, to find out how agencies will need to adapt. He has spent the last year training agencies on GenAI capabilities, as well as pushing the limits of the tools in his own projects.

Mar 5, 202446 min