Experiencing Data w/ Brian T. O’Neill

Brian T. O’Neill from Designing for Analytics

103 episodes · EN-US

Show overview

Experiencing Data w/ Brian T. O’Neill has been publishing since 2022 and, over the four years since, has built a catalogue of 103 episodes, roughly 70 hours of audio in total. Releases follow a fortnightly cadence.

Episodes typically run 35 to 60 minutes, with most landing between 35 and 48 minutes, and run-time is fairly consistent across the catalogue. None of the episodes are flagged explicit by the publisher. It is catalogued as an English-language (EN-US) Technology show.

The show is actively publishing — the most recent episode landed 2 days ago, with 9 episodes already out so far this year. Published by Brian T. O’Neill from Designing for Analytics.

Episodes
103
Running
2022–2026 · 4y
Median length
43 min
Cadence
Fortnightly

From the publisher

Does the value of your insights, analytics, or automated intelligence product sometimes feel invisible to buyers and users? Does your product have impressive analytics and AI technology, but user adoption and sales still are not where you want them to be? While it has never been easier to build data-driven products, why does it still seem so hard to build indispensable data products that users can't live without—and will gladly pay for? I’m Brian T. O’Neill, and on Experiencing Data — a Listen Notes top 2% global podcast — I help founders and B2B software product leaders close the Invisible Intelligence Gap through solo episodes and interviews with leaders at the intersection of product management, UX design, analytics, and AI. If you’re building analytics, BI, or automated intelligence (AI) products, this non-technical show will help you better connect your product to outcomes, value, and the human factors that still matter — even in the age of AI. Subscribe today on all major platforms or browse the episode archive.

Get 1-Page Episode Summaries: https://designingforanalytics.com/experiencing-data-podcast/
About the Host, Brian T. O'Neill: https://designingforanalytics.com/bio/

Latest Episodes


194 - AI for BI: Juan Sequeda on Preparing Your Analytics to Work With LLMs

May 12, 2026 · 50 min

193 - Faster…or Better? Creating Value with Blue Ocean Thinking and AI-Powered Product Development

Apr 28, 2026 · 24 min

192 – Product Usage Does Not = Value: Why “Adoption” Metrics Are Misleading You

Apr 15, 2026 · 46 min

191 - Turning Agents into Software that Sells [Smarter!] with Zig.ai CEO Steve Ancheta

I'm talking with Steve Ancheta, CEO of Zig, a platform designed to free sales teams from repetitive, non-revenue-generating tasks. CRM and logistical tasks can consume up to 72% of a sales team's week, but Zig’s AI agents handle them so reps can focus on closing deals. Unlike tools built for managers, Zig follows a rep-first design—simple, intuitive, and aligned with the motivation to sell more—while also creating an intelligence layer that preserves institutional knowledge and accelerates onboarding for new hires. I wanted to chat with Steve about how he built a product that is both used—and worth paying for—with AI under the hood. Rather than relying on chat prompts, Zig surfaces prioritized tasks in panels and cards, integrates with CRMs and Slack, and builds confidence scores from user interactions. Because Steve comes from the world of sales—and that’s the domain his product sits in—I wanted to explore his “problem clarity” and share that with you, since I often find data and technical founders to be more solution-oriented and lacking in this area. Steve was an open book with me, and I’m hoping other founders trying to turn analytical complexity into commercial clarity can see how Steve is using AI and agents to make data work for end users—and worth paying for. Finally, I also challenge Steve to answer whether Zig.ai is a software company or a services company with a product behind the scenes—a question you might also ask yourself depending on your GTM model.

Highlights / Skip to:
- What is Zig.ai? (00:48)
- When managers see the value of a product but end-users don’t—and how product leaders need to react (5:20)
- What Zig’s UX is like and how it was designed (9:45)
- The sales process and risks salespeople face when demoing Zig (16:12)
- How Zig addressed their time-to-value challenge during the product experience (20:14)
- How Zig found a problem people were willing to pay to solve (24:16)
- Whether an AI product company might be a services company with technology or a traditional software company (24:16)
- The Invisible Intelligence Gap Steve has observed within Zig’s business space (AI and analytics-powered sales tooling) (27:57)
- Why Steve isn’t worried about the major CRMs building internal solutions to circumvent third-party tools like Zig (35:37)
- Steve Ancheta’s advice for trying to bring sophisticated data products to market (39:26)

Apr 1, 2026 · 42 min

190 - Why Discovering Valuable Analytics Use Cases for Your Product Is So Hard (Even with AI)

I’ve seen this pattern repeatedly with teams building analytics and AI products: the issue usually isn’t the quality of the models or the sophistication of the data. The technology often works just fine. The real breakdown happens earlier—when teams begin with the data they already have and try to figure out what to build, instead of starting with the decisions their customers need to make. That approach often produces polished dashboards and compelling features that generate interest, but fail to drive real action. The missing piece is context. Decisions in the real world depend on incentives, habits, risk tolerance, and uncertainty—not just clean data. If your product doesn’t reflect that reality, it won’t meaningfully change behavior. Another common trap is assuming all available data is *evidence* worth surfacing. This “more is better” mindset leads to cluttered analytics tools that offload interpretation onto users. Even conversational AI interfaces can fall into this, encouraging open-ended exploration without helping users reach decisions. The analytics and AI products that succeed take a different approach. They’re designed around decision-making to reduce uncertainty, fit into real workflows, and guide users toward clear actions. In doing so, they bridge the gap between analytical capability and real-world value, making the product’s intelligence tangible, usable, and worth paying for.

Highlights / Skip to:
- The core mistake I see people making during the discovery process of building an insights product (2:07)
- Improve your product strategy by working “backwards” and understanding what decisions customers are trying to make (6:06)
- Insights don’t equal decisions in the real world (7:39)
- Designing with a goal of improving the lives of users in mind (11:17)
- Prototypes as a means of discovery (vs. product/solution validation) (13:48)
- The bias of data availability (20:39)
- Using AI and LLMs for discovery and product UX (24:17)
- Why AI-assisted analytics products should shape UX around making structured decisions (31:03)
- Overcoming the Invisible Intelligence Gap (34:57)
- Final thoughts (37:21)

Links:
CED: My UX Framework for Designing Analytics Tools That Drive Decision Making: https://designingforanalytics.com/ced
Need my help finding the right use cases for your analytics or AI product? Book a complimentary 1x1 discovery call with me: https://designingforanalytics.com/contact/

Mar 17, 2026 · 43 min

189 - The Invisible Intelligence Gap

I’ve worked with a lot of teams building analytics and insights products and decision-support systems. The pattern I keep seeing isn’t that the math is wrong or the ML / AI models are weak. Much of the time, the technology is fine. The challenge is that all that [not always artificial!] intelligence is not surfacing as value to your customer. Dashboards look impressive. AI features demo well. Pilots get strong reactions. And then… usage stalls. Sales cycles drag. Teams quietly revert to spreadsheets. Prospective buyers say they “like the vision,” but deals don’t move into the “closed” stage. If your gut tells you the primary blocker is not your sales process, pricing/packaging, procurement, data quality, or risk/compliance, then you may be suffering from what I call the Invisible Intelligence Gap. Your product’s intelligence simply isn’t visible to them. Three forces tend to amplify this gap. First, the value translation gap, where buyers and users can’t easily connect insights to their own goals. Second, the workflow alignment gap, resulting from the product not fitting how work actually gets done. Third, the trust and control gap, where users lack confidence in how the system reaches conclusions. My frameworks like CED, FOWA, and MIRRR are designed to close these gaps by making value obvious, workflows smoother, and AI more trustworthy.

Highlights / Skip to:
- The challenge of insights not providing value to buyers, end-users, and stakeholders (3:20)
- How the invisible intelligence gap manifests itself (6:42)
- Common symptoms of the invisible intelligence gap (8:10)
- Examples of how changes in human behavior cause the gap (10:00)
- The three amplifiers of the invisible intelligence gap (11:47)
- The CED framework for addressing the intelligence gap problem (18:28)
- Addressing the invisible intelligence gap with FOWA (20:14)
- Using MIRRR to solve the invisible intelligence gap (21:25)

Mar 5, 2026 · 25 min

188 - Can’t Close the Sale? Why Your Product’s UX and Workflow Misalignment Are Killing Sales (Part 2)

I’m continuing my exploration of a hard truth many leaders of analytics software companies run into: deals don’t stall because the tech is weak. Instead, they stall because prospects can’t see the value soon enough or the risk of changing the status quo is too high. This is often a product problem, not a sales one, and obtaining Flow-of-Work Alignment (FOWA) may help you start closing more evals and deals. So what is FOWA? The idea is simple, but demanding: stop showcasing features and start designing experiences that fit into how customers already do their work, create value, and add delight when your product enters the loop. Getting to FOWA means tailoring demos with realistic, industry-specific data, reducing mental translation, and minimizing behavior change. In this scenario, improvements become small, testable bets tied to outcomes, not feature checklists. UX and usability are not cosmetic; they shape trust, adoption, and buyability. When prospects can clearly see themselves succeeding with your product, value feels obvious, evals progress, and deals close.

Highlights / Skip to — steps to implementing Flow-of-Work Alignment (FOWA):
- Tailor your demo or POC to map to the prospects' world and their workflow (1:53)
- Treat product improvements as bets that have to be tested, so that observable outcomes are what you’re holding your product team accountable for (3:57)
- Reducing perceived behavior change (6:39)
- Realize that your product’s visual design is likely impacting your product’s clarity and its desirability (12:29)
- Aligning your sales and product teams around customer outcomes and not feature gaps (18:03)
- Why you might think FOWA won’t work for your product—and how to reframe those objections (24:22)

Feb 17, 2026 · 46 min

187 - Can’t Close the Sale? The Invisible Reasons Prospects Aren’t Buying Your Technically Superior Analytics or AI Product (Part 1)

I’m digging into a frustrating reality many teams face: even technically superior analytics and AI products routinely lose deals—not because the KPIs or models aren’t good enough, but because buyers and users can’t clearly see how the product fits into their day-to-day work. Your demos and POCs may prove what’s possible, but a long time-to-understanding, a heavy thinking burden on the user, and required behavior or process changes introduce risk—and risk kills momentum. When value feels complicated, sales don’t move forward. Adding to the challenge is that many sales efforts focus almost entirely on the fiscal buyer while overlooking the end users who actually have to adopt the product to create outcomes. This buyer–user mismatch, combined with status quo bias, often leads to indecision rather than change. To address this, I explore treating the sales challenge as a product problem—and I introduce the idea of achieving Flow-of-Work Alignment (FOWA). The goal isn’t better persuasion—it’s clearer value. Strong FOWA means transitioning from demonstrating capabilities to helping customers see themselves—and their workflows—represented in your demos and POCs. The result? Prospects understand your value quickly, ask deeper, contextual questions, and deals move forward.

Highlights / Skip to:
- Data products must work harder to expose value clearly to avoid the dreaded “closed-lost” deal stage in your CRM (1:38)
- Making your data product’s value instantly obvious (5:18)
- How the “old model” of selling based on capabilities and feature demos can lead to lost sales (7:22)
- What Flow-of-Work Alignment is and how it can help you unlock deals (13:02)
- How to know if you have achieved FOWA or not in your product and sales process (13:58)

Feb 4, 2026 · 20 min

186 - Why Powerful AI & Analytics Products Feel Useless to Buyers

I’m back! After about 7 years (or more) of bi-weekly publishing, I gave myself a break (to have the flu, in part), but now it’s back to business! In 2026, I’ll be focusing the podcast more on the commercial side of data products. This means more founders, CEOs, and product leader guests at small and mid-sized B2B software companies who are building technically impressive B2B analytics and AI products. With all the focus on AI, I want to focus on things that don’t change: what do value and outcomes look like to buyers and users, and how do we recreate it with analytics and AI? What learnings and changes have leaders had to make on the product and UI/UX side to get buyers to buy and users to use? So, that brings us to today’s episode. Today, I’ll explain why I think model quality, analytics data, and raw AI capability are quickly becoming commodities, shifting the real challenge to how effectively companies can translate their data and intelligence into value that buyers and users can clearly understand and defend. I dig into a core tension in B2B products: fiscal buyers and end users want different things. Buyers need confidence, risk reduction, and defensible ROI, while users care about making their daily work easier and safer. When products try to appeal broadly or force customers to figure out how AI fits into their workflows, adoption breaks down. Instead, I make the case for tightly scoped, workflow-aware solutions that make value obvious, deliver fast time-to-value, and support real decisions and actions. 
Highlights / Skip to:
- Refocusing the trajectory of the show for 2026 (00:31)
- Turning your product’s intelligence into clear, actionable solutions so users can see the value without having to figure it out themselves (4:32)
- You’re selling capability, but buyers are buying relief from a specific pain point (7:33)
- Asking customers where AI fits into their workflow is poor design (16:57)
- Buyers and users both require proof of value, but in different ways (20:05)
- Why incomplete workflows kill trust (24:18)
- The importance of translating technical capability into something a human is willing to own (30:09)

Jan 20, 2026 · 38 min

185 - Driving Healthcare Impact by Aligning Teams Around Outcomes with Bill Saltmarsh

Bill Saltmarsh joins me to discuss where a modern CDO gets the inspiration to “operate in the producty way” in his domain, which is healthcare. Bill is now Vice President of Enterprise Data and Transformation and Chief Data Officer at Children’s Mercy Kansas City, but his early days as an analyst revealed a gap between what stakeholders asked for and the outcomes they sought. This convinced him that data teams need to pause, ask better questions, and prioritize meaningful outcomes over quickly churning out dashboards and reports. Bill and I discuss how a producty mindset can be embedded across an organization. He also talks about why data leaders must set firm expectations. We explore the personal and cultural shifts needed for analysts and data scientists to embrace design, facilitation, and deeper discovery, even when it initially seems to slow things down. We also examine how to define value and ROI in healthcare, where a data team's impact is often indirect. By tying data efforts to organizational OKRs and investing in governance, strong data foundations, and data literacy, he argues that analytics, data, and AI can drive better decisions, enhance patient care, and create durable organizational value.

Highlights / Skip to:
- What led Bill Saltmarsh to run his team at Children’s Mercy “the producty way” (1:42)
- The kinds of environments Bill worked in prior that influenced his current management philosophy (4:36)
- Why data teams shouldn’t be report factories (6:37)
- Setting the standard at the leadership level vs. the everyday work (10:53)
- How Bill is skilling and hiring for non-technical skills (e.g. product, design, etc.) (13:51)
- Patterns that data professionals go through to know if they’re guiding stakeholders correctly (20:54)
- The point when Bill has to think about the financial side of the hospital (26:30)
- How Bill thinks about measuring the data team’s contributions to the hospital’s success (30:28)
- Bill’s philosophy on generative AI (36:00)

Links:
Bill Saltmarsh on LinkedIn

Dec 23, 2025 · 41 min

184 - Part III: Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change

In this final part of my three-episode series on accelerating sales and adoption in B2B analytics and AI products, I unpack a growing challenge in the age of generative AI: what to do when your product automates a major chunk of a user’s workflow only to reveal an entirely new problem right behind it. Building on Part I and Part II, I look at how AI often collapses the “front half” of a process, pushing the more complex, value-heavy work directly to users. This raises critical questions about product scope, market readiness, competitive risks, and whether you should expand your solution to tackle these newly surfaced problems or stay focused and validate what buyers will actually pay for. I also discuss why achieving customer delight—not mere satisfaction—is essential for earning trust, reducing churn, and creating the conditions where customers become engaged design partners. Finally, I highlight the common pitfalls of DIY product design and why intentional, validated UX work is so important, especially when AI is changing how work gets done faster than ever.

Highlights / Skip to:
- Finishing the journey: staying focused, delighting users, and intentional UX (00:35)
- AI solves problems—and can create new ones for your customers—now what? (2:17)
- Do AI products have to solve your customers’ downstream “tomorrow” problems too before they’ll pay? (6:24)
- Questions that reveal whether buyers will pay for expanded scope (6:45)
- UX outcomes: moving customers from satisfied to delighted before tackling new problems (8:11)
- How obtaining “delight” status in the customer’s mind creates trust, lock-in, and permission to build the next solution (9:54)
- Designing experiences with intention (not hope) as AI changes workflows (10:40)
- My “Ten Risks of DIY Product Design…”—why DIY UX often causes self-inflicted friction (11:46)

Links:
Listen to Part I (Episode 182) and Part II (Episode 183)
Read: “Ten Risks of DIY Product Design On Sales And Adoption Of B2B Data Products”
Stop guessing what is blocking your own product’s adoption and sales: schedule a Design-Eyes Assessment with me, and in 90 minutes I'll diagnose whether you're facing a design problem, a product management gap, a positioning issue, or something else entirely. You'll walk away knowing exactly what's standing between your product and the traction you need—so you don't waste time and money on product design "improvements" that won't move your critical KPIs.

Dec 9, 2025 · 14 min

183 - Part II: Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change

In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done. To get there, I explain the concept of user experience (UX) outcomes, and how building your solution to enable these outcomes may be a dependency for you to get sales traction, and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, when agentic AI changes their job). Finally, I talk about how approaching product development as small “bets” helps you build small and learn fast, so you can accelerate value creation.

Highlights / Skip to:
- Continuing the journey: designing for users, workflows, and tasks (00:32)
- How UX impacts sales—not just usage and adoption (2:16)
- Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (4:11)
- Definition of a UX outcome (7:30)
- Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
- Spotting friction and solving the right customer problems first (15:34)
- Collecting actionable user feedback (20:02)
- Moving users along the scale from frustration to satisfaction to delight (23:04)
- Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)

Quotes from Today’s Episode

“One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve. People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment. And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality. What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work. Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you’re no longer debating abstractions; you’re working against the same measurable anchor. And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.”

__

Want people to pay for your product? Solve an *observable* problem—not a vague information or data problem. What do I mean?

“When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones. Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem. If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around. Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why. Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort. And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory.”

__

One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product *changes*—or you’re making *improvements* that might make the product worth paying for at all, worth paying more for, or easier to buy.

“It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like. If you want to create a meani

Nov 27, 2025 · 35 min

182 - Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change

Building B2B analytics and AI tools that people will actually pay for and use is hard. The reality is, your product won’t deliver ROI if no one’s using it. That’s why first principles thinking says you have to solve the usage problem first. In this episode, I’ll explain why the key to user adoption is designing with the flow of work—building your solution around the natural workflows of your users to minimize the behavior changes you’re asking them to make. When users clearly see the value in your product, it becomes easier to sell and removes many product-related blockers along the way. We’ll explore how product design impacts sales, the difference between buyers and users in enterprise contexts, and why challenging the “data/AI-first” mindset is essential. I’ll also share practical ways to align features with user needs, reduce friction, and drive long-term adoption and impact. If you’re ready to move beyond the dashboard and start building products that truly fit the way people work, this episode is for you.

Highlights / Skip to:
- The core argument: why solving for user adoption first helps demonstrate ROI and facilitate sales in B2B analytics and AI products (1:34)
- How showing the value to actual end users—not just buyers—makes it easier to sell your product (2:33)
- Why designing for outcomes instead of outputs (dashboards, etc.) leads to better adoption and long-term product value (8:16)
- How to “see” beyond users’ surface-level feature requests and solutions so you can solve for the actual, unspoken need—leading to an indispensable product (10:23)
- Reframing feature requests as design-actionable problems (12:07)
- Solving for unspoken needs vs. customer-requested features and functions (15:51)
- Why “disruption” is the wrong approach for product development (21:19)

Quotes:

“Customers’ tolerance for poorly designed B2B software has decreased significantly over the last decade. People now expect enterprise tools to function as smoothly and intuitively as the consumer apps they use every day. Clunky software that slows down workflows is no longer acceptable, regardless of the data it provides. If your product frustrates users or requires extra effort to achieve results, adoption will suffer. Even the most powerful AI or analytics engine cannot compensate for a confusing or poorly structured interface. Enterprises now demand experiences that are seamless, efficient, and aligned with real workflows. This shift means that product design is no longer a secondary consideration; it is critical to commercial success. Founders and product leaders must prioritize usability, clarity, and delight in every interaction. Software that is difficult to use increases the risk of churn, lengthens sales cycles, and diminishes perceived value. Products must anticipate user needs and deliver solutions that integrate naturally into existing workflows. The companies that succeed are the ones that treat user experience as a strategic differentiator. Ignoring this trend creates friction, frustration, and missed opportunities for adoption and revenue growth. Design quality is now inseparable from product value and market competitiveness. The message is clear: if you want your product to be adopted, retain customers, and win in the market, UX must be central to your strategy.”

—

“No user really wants to ‘check a dashboard’ or use a feature for its own sake. Dashboards, charts, and tables are outputs, not solutions. What users care about is completing their tasks, solving their problems, and achieving meaningful results. Designing around workflows rather than features ensures your product is indispensable. A workflow-first approach maps your solution to the actual tasks users perform in the real world. When we understand the jobs users need to accomplish, we can build products that deliver real value and remove friction. Focusing solely on features or data can create bloated products that users ignore or struggle to use. Outputs are meaningless if they do not fit into the context of a user’s work. The key is to translate user needs into actionable workflows and design every element to support those flows. This approach reduces cognitive load, improves adoption, and ensures the product's ROI is realized. It also allows you to anticipate challenges and design solutions that make workflows smoother, faster, and more efficient. By centering design on actual tasks rather than arbitrary metrics, your product becomes a tool users can’t imagine living without. Workflow-focused design directly ties to measurable outcomes for both end users and buyers. It shifts the conversation from features to value, making adoption, satisfaction, and revenue more predictable.”

—

“Just because a product is built with AI or powerful data capabilities doesn’t mean anyone will adopt it. Long-term value comes from designing solutions that users cannot live without. It’s about creating experiences that take people from frustration to satisfaction to delight. Products must

Nov 10, 2025 · 22 min

181 - Lessons Learned Designing Orion, Gravity’s AI Analyst Product, with CEO Lucas Thelosen (former Head of Product @ Google Data & AI Cloud)

On today's Promoted Episode of Experiencing Data, I’m talking with Lucas Thelosen, CEO of Gravity and creator of Orion, an AI analyst transforming how data teams work. Lucas was head of PS for Looker, and eventually became Head of Product for Google’s Data and AI Cloud prior to starting his own data product company. We dig into how his team built Orion, the challenge of keeping AI accurate and trustworthy when doing analytical work, and how they’re thinking about the balance of human control with automation when their product acts as a force multiplier for human analysts. In addition to talking about the product, we also talk about how Gravity arrived at specific enough use cases for this technology that a market would be willing to pay for, and how they’re thinking about pricing in today’s more “outcomes-based” environment. Incidentally, one thing I didn’t know when I first agreed to consider having Gravity and Lucas on my show was that Lucas has been a long-time proponent of data product management and operating with a product mindset. In this episode, he shares the “ah-hah” moment where things clicked for him around building data products in this manner. Lucas shares how pivotal this moment was for him, and how it helped accelerate his career from Looker to Google and now Gravity. If you’re leading a data team, you’re a forward-thinking CDO, or you’re interested in commercializing your own analytics/AI product, my chat with Lucas should inspire you! 
Highlights/ Skip to:
Lucas’s breakthrough came when he embraced a data product management mindset (02:43)
How Lucas thinks about Gravity as being the instrumentalists in an orchestra, conducted by the user (04:31)
Finding product-market fit by solving for a common analytics pain point (08:11)
Analytics product and dashboard adoption challenges: why dashboards die, and thinking of analytics as changing the business gradually (22:25)
What outcome-based pricing means for AI and analytics (32:08)
The challenge of defining guardrails and ethics for AI-based analytics products [just in case somebody wants to “fudge the numbers”] (46:03)
Lucas’s closing thoughts about what AI is unlocking for analysts and how to position your career for the future (48:35)

Special Bonus for DPLC Community Members
Are you a member of the Data Product Leadership Community? After our chat, I invited Lucas to come give a talk about his journey of moving from “data” to “product” and adopting a “producty” mindset for analytics and AI work. He was more than happy to oblige. Watch for this in late 2025/early 2026 on our monthly webinar and group discussion calendar.

Note: today’s episode is one of my rare Promoted Episodes. Please help support the show by visiting Gravity’s links below.

Quotes from Today’s Episode
“The whole point of data and analytics is to help the business evolve. When your reports make people ask new questions, that’s a win. If the conversations today sound different than they did three months ago, it means you’ve done your job; you’ve helped move the business forward.” — Lucas
“Accuracy is everything. The moment you lose trust, the business, the use case, it's all over. Earning that trust back takes a long time, so we made accuracy our number one design pillar from day one.” — Lucas
“Language models have changed the game in terms of scale. Suddenly, we’re facing all these new kinds of problems, not just in AI, but in the old-school software sense too. Things like privacy, scalability, and figuring out who’s responsible.” — Brian
“Most people building analytics products have never been analysts, and that’s a huge disadvantage. If data doesn’t drive action, you’ve missed the mark. That’s why so many dashboards die quickly.” — Lucas
“Re: collecting feedback so you know if your UX is good: I generally agree that qualitative feedback is the best place to start, not analytics [on your analytics!]. Especially in UX, analytics measure usage aspects of the product, not the subjective human experience. Experience is a collection of feelings and perceptions about how something went.” — Brian

Links
Gravity: https://www.bygravity.com
LinkedIn: https://www.linkedin.com/in/thelosen/
Email Lucas and team: [email protected]

Oct 28, 202550 min

Ep 180 - From Data Professional to Data Product Manager: Mindset Shifts To Make

In this episode, I’m exploring the mindset shift data professionals need to make when moving into analytics and AI data product management. From how to ask the right questions to designing for meaningful adoption, I share four key ways to think more like a product manager, and less like a deliverables machine, so your data products earn applause instead of a shoulder shrug.

Highlights/ Skip to:
Why shift to analytics and AI data product management (00:34)
From accuracy to impact and redefining success with AI and analytical data products (01:59)
Key Idea 1: Moving from question asker (analyst) to problem seeker (product) (04:31)
Key Idea 2: Designing change management into solutions; planning for adoption starts in the design phase (12:52)
Key Idea 3: Creating tools so useful people can’t imagine working without them (26:23)
Key Idea 4: Solving for unarticulated needs vs. active needs (34:24)

Quotes from Today’s Episode
“Too many analytics teams are rewarded for accuracy instead of impact. Analysts give answers, and product people ask questions. The shift from analytics to product thinking isn’t about tools or frameworks, it’s about curiosity. It’s moving from ‘here’s what the data says’ to ‘what problem are we actually trying to solve, and for whom?’ That’s where the real leverage is, in asking better questions, not just delivering faster answers.”
“We often mistake usage for success. Adoption only matters if it’s meaningful adoption. A dashboard getting opened a hundred times doesn’t mean it’s valuable... it might just mean people can’t find what they need. Real success is when your users say, ‘I can’t imagine doing my job without this.’ That’s the level of usefulness we should be designing for.”
“The most valuable insights aren’t always the ones people ask for. Solving active problems is good, it’s necessary. But the big unlock happens when you start surfacing and solving latent problems, the ones people don’t think to ask for. Those are the moments when users say, ‘Oh wow, that changes everything.’ That’s how data teams evolve from service providers to strategic partners.”
“Here’s a simple but powerful shift for data teams: know who your real customer is. Most data teams think their customer is the stakeholder who requested the work… But the real customer is the end user whose life or decision should get better because of it. When you start designing for that person, not just the requester, everything changes: your priorities, your design, even what you choose to measure.”

Links
Need 1:1 help to navigate these questions and align your data product work to your career? Explore my new Cross-Company Group Coaching at designingforanalytics.com/groupcoaching
For peer support: the Data Product Leadership Community, where peers are experimenting with these approaches. designingforanalytics.com/community

Oct 14, 202545 min

Ep 179 - Foundational UX principles for data and AI product managers

Content coming soon.

Sep 30, 202551 min

Ep 178 - Designing Human-Friendly AI Tech in a World Moving Too Fast with Author and Speaker Kate O’Neill

In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can balance human-centered design at a time when everyone is racing to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into the real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions. Kate stressed the importance of organizations having clear purpose statements and values from the outset, shared the proxy metrics she uses to gauge human-friendliness, and explained her “harms of action vs. harms of inaction” lens for ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.

Highlights/ Skip to:
How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way, when building out AI solutions (1:03)
Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
FOMO and the “solution in search of a problem” problem in data (5:18)
Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)

Quotes from Today’s Episode
“I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big-word, kind of jargon-y kinds of discussions that are easy to think aren't important, but what they all tend to come down to is that alignment between what the business is trying to do and what the person on the other side of the business is trying to do.” –Kate O’Neill
“I've often heard the term digital transformation used almost interchangeably with the term innovation. And I think that that's a grave disservice that we do to those two concepts because they're very different. Digital transformation, to me, sits much more comfortably on the earlier side of the Now–Next Continuum. So, it's about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? And those are two very different sets of questions. And in most organizations, they need to be happening simultaneously.” –Kate O’Neill
“The reason I chose human-friendly [as a term] over human-centered is partly because I wanted to be very honest about the goal and not fall back into jargon-y kinds of language that you and I and the folks listening probably all understand in a certain way, but not the CEOs and the folks that I'm trying to get reading this book and making their decisions in a different way based on it.” –Kate O’Neill
“We love coming up with new names for different things. Like whether something is ‘cloud,’ or whether it’s ‘SaaS,’ or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But there’s a real earnestness [to it] that’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days with the work that I do, really trying to help business leaders, mostly, but a lot of those are non-tech leaders. And I think that’s where this really sticks: you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background.” –Kate O’Neill
“My feeling is that if you're not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders. There has to be ongoing work in this space; it can't just be about defining project requirements and then disappearing. However, I don't see a lot of data teams and AI teams that have non-technical research going on where they're regularly spending time with end users or customers such that they could even imagine what the art of the possible could be.” –Brian T. O’Neill

Links
KO Insights: https://www.koinsights.com/
LinkedIn for Ka

Sep 16, 202550 min

Ep 177 - Designing Effective Commercial AI Data Products for the Cold Chain with the CEO of PAXAFE

In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka the “cold chain”). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions. Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.

Highlights/ Skip to:
Intro to PAXAFE (2:13)
How PAXAFE brings tons of cold chain data together in one user experience (2:33)
Innovation in cold chain analytics is up, but so is cold chain product loss (4:42)
The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14)
Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23)
How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57)
Who the end users of PAXAFE are, and how the product team designs for these users (20:00)
Lessons learned when Ilya’s team fell in love with its own product and didn’t listen to the market (23:57)
Pharma loses around $40 billion a year relying on ‘Bob’s intuition’ in the warehouse: how PAXAFE balances institutional user knowledge with the cold, hard facts of analytics (42:43)

Quotes from Today’s Episode
"Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time." –Ilya
"As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn’t. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions." –Ilya
"Our initial hypothesis and vision for what PAXAFE could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design." –Ilya
"We spent months perfecting risk scores… only to hear from customers, ‘I don’t care about a 71 versus a 62—just tell me what to do.’ That single insight changed everything." –Ilya
"If you’re not talking to customers or building a product that supports those conversations, you’re literally wasting time. In the zero-to-product-market-fit phase, nothing else matters; you need to focus entirely on understanding your customers and iterating your product around their needs." –Ilya
"Don’t build anything on day one, and probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems." –Ilya

Links
PAXAFE: https://www.paxafe.com/
LinkedIn for Ilya Preston: https://www.linkedin.com/in/ilyapreston/
LinkedIn for company: https://www.linkedin.com/company/paxafe/

Sep 3, 202549 min

Ep 176 - (Part 2) The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications

This is part two of the framework; if you missed part one, head to episode 175 and start there so you're all caught up. In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s: Redirect, Rerun, and Rollback—and share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use. I explain human-centered approaches to thinking about automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I am hoping this control framework will help you get more value out of your data while simultaneously creating value for the human stakeholders, users, and customers.

Highlights / Skip to:
Introducing the MIRRR UX Framework (1:08)
Designing for trust and user adoption, plus the perspectives you should be including when designing systems (2:31)
Monitor and Interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
Explaining “redirection” in the example context of claims adjusters working on insurance claims, so adjusters (users) can focus on important decisions (4:35)
Rerun controls let humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
Rerun vs. Redirect: what the difference is in the context of AI, using additional use cases from the insurance claim processing domain (12:07)
Empathy and user experience in AI adoption, and how the most useful insights come from directly observing users—not from analytics (18:28)
Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)

Quotes from Today’s Episode
"The value of AI isn’t just about technical capability; it’s based in large part on whether the end users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value."
"In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it."
"Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction."
"Reruns and rollbacks shouldn’t be seen as failures; they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen."
"You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it, listening to them, observing their workflows, and learning what really builds or breaks their confidence."
"You’ll never learn the real reasons behind a team’s choices by only looking at analytics; you have to actually talk to them and watch them work."
"Labels matter: what you call a button or an action can shape how people interpret and trust what will happen when they click it."

Links
Part 1: The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications
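The episode describes MIRRR conceptually rather than in code, but the five control points can be pictured as a minimal human control surface over an agent's task queue. The sketch below is my own illustration, not anything from the show; the `MirrrControls` class, the task states, and all method names are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskState(Enum):
    RUNNING = auto()
    PAUSED = auto()
    ROLLED_BACK = auto()

@dataclass
class AgentTask:
    name: str
    state: TaskState = TaskState.RUNNING
    history: list = field(default_factory=list)  # audit trail surfaced by Monitor

class MirrrControls:
    """Hypothetical control surface exposing the five MIRRR actions to a human."""

    def __init__(self) -> None:
        self.tasks: dict[str, AgentTask] = {}

    def start(self, name: str) -> None:
        self.tasks[name] = AgentTask(name)

    # M (Monitor): transparency into what the agent is doing and has done
    def monitor(self, name: str) -> dict:
        t = self.tasks[name]
        return {"state": t.state.name, "history": list(t.history)}

    # I (Interrupt): a human, or an automated guardrail, pauses the task
    def interrupt(self, name: str) -> None:
        self.tasks[name].state = TaskState.PAUSED

    # R (Redirect): change the task's goal before resuming
    def redirect(self, name: str, new_goal: str) -> None:
        t = self.tasks[name]
        t.history.append(f"redirected: {new_goal}")
        t.state = TaskState.RUNNING

    # R (Rerun): redo a task after an unexpected result
    def rerun(self, name: str) -> None:
        t = self.tasks[name]
        t.history.append("rerun")
        t.state = TaskState.RUNNING

    # R (Rollback): undo the task's effects entirely
    def rollback(self, name: str) -> None:
        t = self.tasks[name]
        t.history.append("rolled back")
        t.state = TaskState.ROLLED_BACK
```

In the episode's insurance example, an adjuster might `interrupt` a claims-triage task, inspect it via `monitor`, then `rerun` or `rollback` it; the point of the sketch is simply that each MIRRR action maps to an explicit UI affordance rather than an invisible background behavior.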

Aug 19, 202529 min

Ep 175 - The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications (Part 1)

In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics and ML, and now LLM-driven AI agents, is getting end users and stakeholders to trust and utilize these data products—especially if we’re asking humans in the loop to make changes to their behavior or ways of working. In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may in many situations be essential to ensure any automated workers engender trust with their human overlords. By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI. Using use cases from insurance claims processing, I introduce the first two of five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents often should operate and interact within human systems:
Monitor – enabling appropriate transparency into AI agent behavior and performance
Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
…and in a couple of weeks, stay tuned for part 2, where I’ll wrap up this first version of my MIRRR framework.

Highlights / Skip to:
00:34 Introducing the MIRRR UX Framework for designing trustworthy agentic AI applications
01:27 The importance of trust in AI systems and how it is linked to user adoption
03:06 Cultural shifts, AI hype, and growing AI skepticism
04:13 Human-centered design practices for agentic AI
06:48 I discuss how understanding your users’ needs does not change with agentic AI, and that trust in agentic applications has direct ties to user adoption and value creation
11:32 Measuring the success of agentic applications with UX outcomes
15:26 Introducing the first two of five MIRRR framework control points:
16:29 M is for Monitor: understanding the agent’s “performance” and the right level of transparency end users need, from individual tasks to aggregate views
20:29 I is for Interrupt: when and why users may need to stop the agent—and what happens next
28:02 Conclusion and next steps

Aug 6, 202528 min
© 2019 Designing for Analytics, LLC