
Elevate Your AIQ
117 episodes — Page 1 of 3
Ep 117: Cultivating Curiosity and Amplifying Human Knowledge with Bob Danna
Ep 116: Evolving the HR Function Through Agentic AI and Human Potential with Laura Maffucci
Ep 115: Humanizing the Hiring Experience and Earning Candidate Trust with Jerry Jao
Ep 114: Redesigning Work and Workforce Strategy for the Agentic Era with Paul Rubenstein

Ep 113: Leading with Business Strategy to Deliver Sustainable AI Value with Charlene Li
Charlene Li, analyst, author, and disruptive leadership expert, returns to Elevate Your AIQ to discuss with Bob her newly released book Winning with AI, co-authored with Dr. Katia Walsh. Charlene makes the case that most organizations are failing with AI because they treat it as a technology initiative rather than a strategic one — and lays out a 90-day, 12-step framework for building a foundation that creates real enterprise value. The conversation revisits themes from her Fall 2024 appearance, including responsible AI and the human-AI partnership, and explores how the landscape has evolved. Key topics include AI fluency as an organizational imperative, workforce reinvestment over workforce reduction, and the emerging concept of integrated intelligence — where human and AI capabilities combine to create something genuinely superhuman.

Keywords
Charlene Li, Winning with AI, Katia Walsh, AI strategy, AI fluency, AI literacy, integrated intelligence, superhuman worker, workforce planning, reskilling, pilot purgatory, responsible AI, ethical AI, governance, human centricity, talent transformation, future of work, organizational disruption, values-based AI, co-intelligence

Takeaways
- Lead with business strategy, not AI technology — the question is never "what can we do with AI?" but "how can AI help us accomplish what we're already trying to do?"
- AI fluency, not just literacy, is the goal — fluency means reaching for AI naturally, trusting it, and using it to learn how to use it better, like chopsticks becoming second nature
- Organizations stuck in pilot purgatory are procrastinating real decisions — pilots give everyone an excuse not to commit, and that dooms projects from the start
- Successful examples show a better path: use AI to raise workforce quality first, then expand customer value, then reinvent the business entirely
- Reskilling requires both organizational imagination and honest values — the IKEA story turned 8,500 displaced service reps into a $1B design business
- Integrated intelligence combines AI's speed and scale with uniquely human traits — empathy, judgment, intuition, self-reflection, and wisdom — to create superhuman capability
- AI fluency in hiring is shifting from a red flag to a baseline expectation — how candidates use AI reveals curiosity, creativity, and adaptability far better than traditional interviews
- Responsible AI governance done right isn't a compliance burden — a gold-standard internal policy means regulation becomes a checkbox, not a crisis

Quotes
"You don't need an AI strategy — you already have a business strategy. Figure out what of your business strategy could really be impacted with AI."
"Automating a broken process is the definition of madness. Because of AI, could we do this in a completely different way?"
"AI can only be as creative as your questions are. It can only be as empathetic as you are."
"We should stop doing pilots. It's just another way to procrastinate having to say yes or no."
"The first thing they said was, we are not going to use AI to cut people. That is not the intent going in."
"You aim for a higher level than any regulation would ever want. You go for the gold standard and whatever they ask of you, of course you do those things."

Chapters
00:03 Welcome and guest introduction
01:27 Catching up since Fall 2024 and the impetus for Winning with AI
02:45 The 90-day framework and leading with business strategy
05:46 Reimagining work versus automating broken processes
09:22 AI fluency as an organizational imperative
14:06 Making AI practice habitual and learning in community
17:54 Embedding AI in the flow of work and escaping pilot purgatory
20:07 Workforce reinvestment and a recent case study
26:35 Reskilling, redeployment, and the IKEA story
29:54 Getting C-suite and boards to embrace a human-centric approach
33:38 Starting with customers and thinking beyond efficiency
38:30 Building AI fluency fast and making the investment
41:38 AI fluency in recruiting and hiring for AI capability
47:52 Integrated intelligence and the rise of the superhuman worker
50:42 From individual productivity to team and organizational impact
52:14 Values-based AI and imbuing organizational values into AI systems
55:53 Responsible and ethical AI as a strategic advantage
59:38 Goldilocks governance and the 90-day blueprint
01:00:21 Closing thoughts and book information

Charlene Li: https://www.linkedin.com/in/charleneli
“Winning With AI”: https://winningwithaibook.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 112: Architecting the Human-AI Partnership to Turn AI Strategy into Results with Oded Dubovsky
Bob Pulver reconnects with former IBM colleague Oded Dubovsky, founder of STRAIX (Strategy for AI Execution), an advisory practice helping organizations adopt AI thoughtfully and effectively. Oded shares a career journey spanning over two decades at IBM Research's Haifa Lab — where he led pioneering cognitive computing and computer vision projects — through applied AI work at Intel, and into independent consulting. The conversation explores why 95% of organizations struggle to move beyond AI aspiration to real execution, and what it takes to build a solid foundation before layering in AI. Bob and Oded also reflect on the enduring value of human ingenuity, originality, and orchestration in an increasingly AI-assisted world.

Keywords
Oded Dubovsky, STRAIX, AI strategy, AI execution, AI adoption, cognitive computing, computer vision, IBM Research, Haifa Lab, Watson, automation, generative AI, vibe coding, AI-assisted coding, responsible AI, human centricity, AI readiness, orchestration, innovation, shadow AI

Takeaways
- Only about 5% of companies successfully adopt AI — most struggle with where to start, what tools to use, and how to build the right foundation before scaling
- AI is the "penthouse" built on top of decades of IT, software engineering, and automation experience — that foundational knowledge remains critical
- The human role is shifting from execution to orchestration and architecture — developers and knowledge workers are becoming "team leads" directing AI agents
- Responsible AI development means thinking through security, data, scalability, and governance from the start — not as an afterthought
- Slowing down to think carefully before prompting or building — echoing Einstein's 55/5 rule — leads to better, more scalable outcomes
- Early cognitive computing projects at IBM (food recognition, augmented reality for remote guidance) were ahead of their time, foreshadowing capabilities now taken for granted
- Human originality and the ability to generate truly novel ideas remain distinctly human traits that AI has not replicated

Quotes
"AI is kind of the top level, like the penthouse on top of all of that."
"95% are just saying we need AI — they kind of don't know how to absorb that, how to start using it."
"Once I crossed the line, I couldn't go back."
"Think about it — you just got a promotion. You're a team lead now. You don't micromanage. You give them the bigger picture."
"If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and five minutes thinking about the solution." — Einstein, as quoted by Oded
"Slow down to speed up."

Chapters
00:02 Welcome and introductions
01:04 Oded's background and career journey from IBM to Intel to STRAIX
08:08 Early cognitive computing at IBM — the Watson era and the "What Did I Eat?" project
13:01 From research to product — augmented reality, 3D cameras, and lessons learned
17:54 How AI adoption is accelerating and compressing what once took a decade
20:14 Why 95% of organizations struggle to execute on AI
24:54 How STRAIX works — mapping pain points, building a heat map, and guiding implementation
29:47 Automation tools, vibe coding, and the value of foundational experience
33:13 Human readiness and the mindset shift required to embrace AI
37:22 AI agents, social networks, and the human as orchestrator
44:20 Responsible AI development — building with guardrails from the start
51:26 Asking better questions and thinking architecturally before building
53:31 Closing thoughts and how to connect with Oded

Oded Dubovsky: https://www.linkedin.com/in/odeddubovsky
STRAIX: www.straix.biz
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 111: Building and Managing AI Agents to Shape the Future of Work with Jacob Bank
Bob Pulver sits down with Jacob Bank, Co-founder and CEO of Relay.app, whose career arc — from Stanford's Multi-Agent Systems Lab to founding Timeful (acquired by Google in 2015) to leading Gmail and Google Calendar product teams — represents one of the most continuous threads in AI agent development. Jacob frames AI agents not as software to configure, but as employees to hire, coach, and manage, arguing that great people managers are naturally suited to the AI era. He maps out a three-tier AI stack everyone should adopt and explores how knowledge work will be restructured, why AI literacy is non-negotiable, and how small businesses can now compete at scales once unimaginable.

Keywords
Jacob Bank, Relay.app, AI agents, agentic workflows, autonomous workers, workflow automation, small business, AI literacy, people management, Timeful, Google Calendar, Gmail, knowledge work, G&A, go-to-market, responsible AI, human-in-the-loop, SaaS evolution

Takeaways
- The right mental model for AI agents is employee management: give them a job description, set expectations, provide feedback, and apply the same code of conduct as any team member
- Everyone needs three AI tools: a chatbot for conversation, a copilot for real-time task delegation, and an autonomous agent platform for proactive, repeatable work
- Relay runs on 9 humans and ~60 AI agents — and Jacob sees a path to serving 100x more customers with roughly the same team size
- AI levels the playing field for small businesses, enabling work at a scale previously only achievable by much larger organizations
- Jacob's three-level delegation progression: tasks you already do, tasks you're capable of but never have time for, and tasks you'd otherwise hire an expert for
- AI literacy is not optional — it's becoming a baseline requirement for effective work, equivalent to basic computer literacy

Quotes
"We're all managers now — that is the skill set we need."
"If you have a job that is just to write the blog post about X, that job is not going to exist anymore."
"It's not optional. This is going to be a requirement of being an effective worker in the future."
"Whenever I have an AI agent doing a classification task, I always ask the AI to explain its rationale — because then you can correct it for next time."
"At some point you'll cross this tipping point where you don't have to tell yourself to go use AI — it'll suck you in."

Chapters
00:02 Welcome and introductions
00:56 Jacob's origin story and agent-oriented programming
02:54 From Timeful to Google
04:41 Pre-LLM AI features in Gmail and Calendar
06:15 AI coworkers vs. productivity tool nudges
07:39 Early agent research and org disruption
09:24 Restructuring knowledge work
11:45 Evolving human roles and AI literacy
13:32 The social complexity of scheduling
15:16 Credentialed jobs at risk
17:24 AI leveling the playing field for small business
18:17 Inside Relay — 9 humans and 60 agents
19:41 The three-tier AI stack
22:38 Relay as intelligent workflow automation
23:42 SaaS selection in the agent era
26:47 Platform consolidation and SaaS business models
28:13 Deploying agents across G&A, GTM, and R&D
33:16 Agent collaboration and human oversight
34:21 When to build vs. buy
37:56 Three levels of AI delegation
39:50 Scaling AI readiness across organizations
42:22 Responsible AI and the employee management lens
44:14 Evaluating agents vs. testing software
45:51 The blast radius problem
48:09 Bias, coachability, and correcting agents
49:29 Closing advice — go one step further
50:45 What's next for Relay

Jacob Bank: https://www.linkedin.com/in/jacobbank
Relay.app: https://relay.app
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 110: Rewiring Organizations for Human-Centric AI Transformation with Melissa Reeve
Bob Pulver and Melissa Reeve explore AI transformation and organizational design through the lens of Melissa's Hyperadaptive framework. They unpack what it means to become AI native, why most enterprises stumble by neglecting human support structures, and how governance, AI activation hubs, and AI leads create always-on learning organizations. The conversation tackles the reinvestment dilemma — what to do with capacity freed by AI — and makes the case for durable skills, systems thinking, and career lattices over ladders. Both Bob and Melissa draw on their non-linear careers and share the belief that humans remain essential connective tissue in any AI-powered future.

Keywords
Hyperadaptive, AI native, AI transformation, support structures, AI activation hubs, AI leads, dynamic governance, systems thinking, durable skills, adjacent competencies, agentic workflows, responsible AI, triple bottom line, career lattice, organizational design, value streams, Melissa Reeve, Elevate Your AIQ

Takeaways
- Most AI transformations fail not because of technology, but because organizations underinvest in support structures — from AI councils to activation hubs to frontline AI leads
- Becoming AI native is a gradual five-stage journey: foundation, workflow integration, agentic AI, scaling agents, and full hyper-adaptivity
- The bifurcation problem is real: a small percentage self-direct their AI learning while the majority are left behind without programmatic support
- Individual productivity gains are a vanity metric — what matters is whether AI unlocks new organizational capabilities and a more ambitious mission
- The shift for workers is from doing the task to building, monitoring, and maintaining the AI that does it — durable skills like systems thinking are central to that transition
- Adjacent competencies unlocked by AI are where breakthrough innovation happens, especially at the intersection of previously siloed domains
- Responsible AI and the triple bottom line — people, profit, and planet — must be woven into AI native organizations from the start

Quotes
"A piano is easy to use — you can dink around on the keys all day, but it's not really easy to learn."
"You can't get 21st century results with the 20th century operating system."
"With great power comes great responsibility — and I don't think there's enough attention being put to the implications of AI."
"The shift is from creating to building, monitoring, or maintaining — and there will always be room for the artisans."
"AI changes who can do what — and that's where the innovation is, at the overlay of disciplines."

Chapters
00:02 Welcome and introductions
01:17 Melissa's non-linear path and the origins of Hyperadaptive
03:49 Systems thinking, transferable skills, and shared career philosophies
05:13 Unpacking AI native and what it means for organizational design
07:53 Why large enterprises are struggling and the aircraft carrier analogy
09:21 AI maturity, readiness, and knowing where to draw the line
11:14 The biggest mistake: neglecting human support structures
13:57 AI activation hubs, AI leads, and dynamic governance
19:14 Centralized vs. functional governance layers
23:14 Where most organizations stand in early 2026
26:16 Individual productivity as a vanity metric
28:02 Unlocking organizational potential beyond current capabilities
31:22 Adjacent competencies, durable skills, and the future of careers
37:48 Systems thinking and redesigning work
40:05 Career lattices, value streams, and Unilever's talent model
43:10 AI governance, responsible AI, and the triple bottom line
50:48 Melissa's book release details

Melissa Reeve: https://www.linkedin.com/in/melissamreeve
Hyperadaptive Solutions: https://hyperadaptive.solutions
For advisory and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 109: Championing Human Originality to Accelerate AI Transformation with Jonathan Aberman
Jonathan Aberman — venture capitalist, entrepreneur, educator, and CEO of Hupside — joins Bob Pulver to explore why AI readiness is fundamentally a human potential problem. Hupside's Original Intelligence Quotient (OIQ) provides an objective measurement of human originality relative to AI output, giving organizations a clear signal of who can thrive in an AI-augmented environment, who needs development, and how to compose teams for transformation. Jonathan and Bob dig into the dangerous feedback loop that AI can create when misused, and why originality is the true competitive differentiator. The conversation spans higher education, venture capital, workforce design, and the future of digital credentials, all through the lens of keeping humans central to value creation.

Keywords
Jonathan Aberman, Hupside, OIQ, Original Intelligence Quotient, AI readiness, human originality, talent transformation, workforce design, higher education, venture capital, AI augmentation, digital credentials, collective intelligence, responsible AI, human-AI symbiosis

Takeaways
- Hupside's OIQ objectively measures human originality against AI output, helping organizations identify who to develop, elevate, or support through AI transformation
- AI creates a self-reinforcing feedback loop that debilitates when misused — but as a tool, it can powerfully accelerate human creativity
- Originality equals novelty plus salience; AI can generate novelty, but humans remain essential for determining what's meaningful
- Higher education's real challenge isn't cheating prevention — it's teaching students to reason well with AI, then measuring output quality
- Misaligning high-OIQ talent with constrained roles leaves value on the table; matching autonomy to originality profiles is a key workforce design opportunity
- The greatest long-term AI risk may be whether rising capability gradually excludes people from competing as knowledge workers
- OIQ and AIQ scores are dynamic and improvable — making them well-suited for portable digital credential profiles

Quotes
"AI has a couple of limitations that make it different from every tool humans ever invented — it creates a self-reinforcing loop that can cause debilitation if not used properly."
"We're the umpire in a baseball game. We're not the players — you and your listeners are the players."
"AI is not a cheating problem, it's an education problem."
"Originality is novelty plus salience. As long as humans are the ones consuming, AI will always be at best a lieutenant."
"The more we [flood] society with sameness, the more people who stand out are going to be important."
"I'm not worried about whether AI becomes sentient. I'm more worried about whether it raises the bar and starts to exclude people."

Chapters
00:02 Welcome and introductions
02:58 The founding of Hupside and the OIQ origin story
05:35 AI readiness as a human potential problem
07:53 OIQ in higher education and rethinking assessment
09:11 K-12 considerations and bias mitigation
11:20 VC and portfolio applications of OIQ
15:11 Embedding OIQ into the talent lifecycle
19:56 Autonomy, role design, and workforce orchestration
24:42 Higher education, authenticity, and the value of originality
27:04 Innovation management and organizational barriers to AI adoption
34:52 Short-termism, Silicon Valley monoculture, and pushing back
39:25 Can LLMs become truly original? Shared novelty vs. human originality
43:20 Collective intelligence and the wisdom of crowds
48:53 Digital credentials, OIQ in talent profiles, and data ownership
54:43 What's next for Hupside and closing thoughts

Jonathan Aberman: https://www.linkedin.com/in/jonathanaberman
Hupside: hupside.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 108: Disrupting Insurance While Designing and Building Responsibly with Juan Garcia
Juan Garcia, co-founder of Tuio, a fully digital insurance company based in Spain, joins Bob to discuss how Tuio is reimagining personal lines insurance for digitally-native consumers long underserved by traditional carriers. Juan shares how Tuio evolved its AI strategy from chasing operational efficiency to making smarter decisions across marketing, underwriting, and claims. Tuio built a proprietary AI claims agent that surfaces next-best-action recommendations with confidence scores, always with a human in the loop. The conversation also explores Tuio's grassroots approach to AI literacy, responsible design, and the organizational courage required to fundamentally rethink how a company works.

Keywords
Juan Garcia, Tuio, insurtech, digital insurance, personal lines, Spain, AI strategy, claims automation, Watson, human in the loop, AI literacy, responsible AI, subscription insurance, underwriting, organizational transformation, vertical AI, bottom-up innovation

Takeaways
- Tuio identified a digitally-native consumer segment structurally unprofitable for traditional insurers and built a model around serving them through simplicity and transparency
- Most AI pilots focus on the wrong 10%: cost-to-serve efficiencies. Real value lies in improving decisions across marketing and claims, which represent ~85% of an insurer's cost base
- Watson, Tuio's AI claims agent, processes multimodal inputs and generates next-best-action suggestions with confidence scores — routing complex cases to human reviewers
- Tuio never automates negative customer decisions — not just due to EU regulation, but because human empathy is irreplaceable in those moments
- By subsidizing any AI tools employees want to explore, Tuio unlocked bottom-up innovation — including a veterinarian who independently prototyped Watson's logic for pet health claims
- The real barrier to enterprise AI transformation is organizational courage: reworking processes and structures around AI requires strong leadership

Quotes
"AI is something that makes you rethink the way you do your whatever you do — and that's going to be different industry per industry, even company per company."
"We switched from chasing cost-to-serve efficiencies to using AI to make better decisions — growing efficiently, underwriting smarter, and managing claims more effectively."
"We will never automate negative decisions. If you start from the standpoint that your customers are your most valuable resource, you want to give them the most humane treatment you can."
"If you don't give people these tools, you'll miss all the bottom-up ideas from the people actually in the trenches every day."
"Even if you can build it, it doesn't mean you should. Just because AI can do something doesn't mean you should deploy it there."

Chapters
00:02 Welcome and introductions
00:44 Juan's background: from telecom engineer to insurtech co-founder
03:31 Horizontal vs. vertical AI value — where the real opportunity lies
06:41 Tuio's target market and the underserved digitally-native consumer
12:54 Rethinking insurance: digital simplicity as competitive advantage
16:03 Tuio's AI evolution: from chatbot to decision intelligence
20:54 Watson: Tuio's AI claims agent and the shift to next-best-action
23:24 Human in the loop: why some decisions will never be automated
28:53 Building AI literacy through empowerment, not training mandates
32:52 Bottom-up innovation and the veterinarian who built Watson's prototype
40:31 AI readiness, responsible design, and knowing what not to build
45:15 Organizational courage and why AI transformation is harder than those before it
53:30 Closing reflections and what's next for Tuio

Juan Garcia: https://www.linkedin.com/in/juanga2/
Tuio: https://tuio.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 107: Measuring AI Maturity, ROI, and Organizational Impact with Russ Fradin
Bob Pulver sits down with Russ Fradin, Founder and CEO of Larridin, to explore what it really takes for organizations to move from AI experimentation to measurable impact. They unpack the tension between AI excitement and enterprise reality, focusing on ROI, workforce readiness, responsible adoption, and the cultural shifts required to unlock productivity gains. Russ outlines why measurement and visibility are the missing pieces in most AI strategies and makes the case that high-agency professionals who embrace AI will shape the future of work. The conversation reframes AI not as a job eliminator, but as a force multiplier—if leaders build the right scaffolding to support their people.

Keywords
Russ Fradin, Larridin, AI ROI, AI readiness, AI maturity, workforce transformation, CIO strategy, CHRO strategy, CFO decision-making, productivity measurement, high-agency professionals, AI adoption, responsible AI, enterprise AI, organizational change

Takeaways
- AI adoption without measurement leads to experimentation without accountability. CIOs, CFOs, and CHROs need visibility into what tools are actually being used—and whether they drive real productivity.
- The future of knowledge work is humans working with AI tools alongside agents. High-agency professionals who embrace AI will dramatically amplify their output and career trajectory.
- Organizations must move beyond individual productivity metrics toward team and enterprise-level effectiveness.
- Responsible AI adoption requires training, policy scaffolding, and clarity around secure, enterprise-grade usage.
- Companies that reinvest AI-driven productivity into growth will outperform those focused solely on short-term margin gains.

Quotes
“You can’t possibly understand the ROI of these tools without understanding what’s being used in your organization.”
“Having great technology is necessary, but not sufficient to drive change.”
“The future of work is humans using AI tools, working alongside agents.”
“There’s no such thing as a knowledge worker five years from today who isn’t using AI in some part of their job.”
“We’re effectively redefining what it takes to succeed in a lot of these roles—in real time.”
“The companies that don’t partner with their employees on this transformation will get left behind.”

Chapters
00:02 Welcome and Introduction
00:31 Russ’s Background and the Vision Behind Larridin
01:32 Why AI Is a Generational Technology Shift
03:34 The Measurement Gap in Enterprise AI Adoption
06:17 Workforce Anxiety and AI Upskilling
10:33 The ROI Question and Productivity Metrics
15:10 Global Talent, Competition, and AI Parallels
20:17 Responsible AI and Security Considerations
26:20 Building the Scaffolding for Adoption
30:48 Understanding What “Great” Looks Like
34:55 Who Captures the Productivity Gains?
40:22 The High-Agency Advantage in the AI Era
46:09 Why Smart Companies Invest in Their People
52:04 What’s Next for Larridin
53:09 Closing Remarks

Russ Fradin: https://www.linkedin.com/in/rfradin
Larridin: https://larridin.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 106: Activating Network Intelligence to Unlock Strategic Opportunities with Stephen Messer
Bob Pulver is joined by Stephen Messer, serial entrepreneur and co-founder of Collective[i] and Intelligence.com, to explore how collective intelligence, social analytics, and contextual AI are reshaping how business gets done. Stephen challenges the limitations of traditional SaaS and language models, arguing that true AI value comes from modeling real-world systems — especially how trust, relationships, and buying decisions actually unfold. The conversation dives into economic foundation models, the hidden power of relationship graphs, and why activating trusted networks may be the missing link in sales, hiring, and enterprise decision-making. Together, they unpack how removing friction and restoring context can unlock warp-speed productivity and more human-centered outcomes.

Keywords
Stephen Messer, Collective[i], Intelligence.com, collective intelligence, economic foundation model, relationship graphs, trust networks, contextual AI, sales productivity, forecasting, CRM transformation, go-to-market strategy, weak ties, network intelligence, AI agents, decision-making

Takeaways
- Collective intelligence enables AI to model real-world business systems, not just generate language or automate workflows.
- Context — including relationships, timing, incentives, and market conditions — is the missing ingredient in most AI-driven decision-making.
- Traditional SaaS stacks create “silos of intelligence,” limiting visibility and reducing the effectiveness of AI tools layered on top.
- Relationship graphs built from verified interactions unlock faster, higher-trust introductions and better business outcomes.
- Trust acts as an accelerator in commerce, reducing friction and enabling decisions at “warp speed.”
- Economic foundation models can forecast deal outcomes and market shifts by observing patterns across organizations.
- AI should remove internal friction so humans can focus on value creation, not administrative workflows.
- The future of work depends on combining contextual intelligence with trusted human networks.

Quotes
“To the man with a hammer, the world looks like a nail.”
“You’re not modeling words — you’re modeling a system.”
“If I don’t understand the context, I can’t understand the outcome.”
“Trust enables transactions at warp speed.”
“Most AI today is predicting the next best word — not the next best decision.”
“The friction to leverage your own network is far too high.”

Chapters
00:01 Introduction and Stephen’s Entrepreneurial Journey
00:40 Founding Collective[i] and the Vision Behind It
02:22 Replacing the Traditional Sales Stack with Contextual AI
05:46 Why Context Matters More Than Prompt Engineering
09:18 Systems of Record vs. Systems of Understanding
16:01 The Limits of LinkedIn and Relationship Context
23:24 Introducing Intelligence.com and Verified Networks
36:39 The Origins of Collective Intelligence and Economic Modeling
48:20 Trust Networks, Hiring, and Weak Ties
55:52 Forecast Series and the Power of Long-Form Dialogue
1:00:58 Closing Thoughts and What’s Next

Stephen Messer: https://www.linkedin.com/in/stephenmesser
Collective[i]: https://collectivei.com/
Intelligence.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 105: Transforming High-Volume Hiring for Greater Efficiency and Effectiveness with Dave Vu
Bob sits down with Dave Vu, Co-founder of Ribbon, to explore how AI is reshaping high-volume hiring and the candidate experience. Drawing on his background in recruiting, venture capital, and scaling AI startups, Dave shares why the hiring funnel is breaking under application volume—and how AI interviews can help close the gap. They discuss human-in-the-loop design, responsible AI, regulatory trends, bias mitigation, and why transparency and feedback are critical to building trust in the future of work.

Keywords
Dave Vu, Ribbon.ai, AI interviews, high-volume hiring, candidate experience, responsible AI, human-in-the-loop, talent acquisition, hiring automation, bias mitigation, AI regulation, recruiter efficiency, quality of hire, generative AI

Takeaways
- Application volume has grown exponentially while recruiter headcount has remained relatively flat, creating a widening efficiency gap.
- AI interviews can reduce screening time by 50% or more while improving consistency and fairness.
- Candidate experience improves when applicants receive timely engagement, flexibility, and meaningful feedback.
- Human-in-the-loop design ensures AI handles repetitive tasks while recruiters retain decision-making authority.
- Transparency about AI usage builds trust and increases candidate adoption.
- Regulatory clarity will accelerate enterprise adoption of AI in hiring.
- Responsible AI implementation requires balancing innovation with bias mitigation and compliance guardrails.
- Generative AI advancements are reshaping not only hiring, but content creation and digital trust more broadly.

Quotes
“Our long-term mission is to hire within 24 hours and make hiring faster and fairer.”
“Human-centricity doesn’t equate to anti-automation.”
“The recruiter and hiring manager are always in the driver’s seat.”
“It’s not about replacing humans—it’s about amplifying their capacity.”
“Great candidate experience comes down to respect for their time.”
“Regulations create certainty—and certainty accelerates adoption.”

Chapters
00:02 Introduction and Dave’s career journey in talent
02:55 Scaling an AI startup and identifying hiring challenges
05:02 The high-volume hiring problem and Ribbon’s mission
10:40 Designing a better candidate experience with AI
15:16 Rethinking resumes and screening inefficiencies
22:41 Human-in-the-loop and responsible AI principles
24:40 Regulation, transparency, and enterprise adoption
28:57 Candidate acceptance and AI interview adoption trends
34:28 Integration with ATS platforms and workflow evolution
43:01 Personal reflections on generative AI and digital trust
49:29 AI literacy, workforce disruption, and the future of hiring

Dave Vu: https://www.linkedin.com/in/dave-vu
Ribbon: https://ribbon.ai
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 104: Sustaining Human Performance and Wellbeing in an AI Era with Tim Borys
Bob Pulver is joined by Tim Borys, a leader who wears many hats across executive coaching, workplace wellbeing, entrepreneurship, and podcasting. Drawing on Tim’s journey from elite athletics to advising leaders and organizations, the conversation explores sustainable human performance, burnout, adaptability, and leadership in times of constant change. Together, Bob and Tim examine why human-centric thinking is more critical than ever as AI reshapes work—and how individuals and organizations can thrive without losing sight of wellbeing, purpose, and agency.
Keywords
Tim Borys, Fresh Group, workplace wellbeing, human performance, burnout, executive coaching, leadership, adaptability, AI and work, human-centric AI, WRKdefined Podcast Network, Elevate Your AIQ
Takeaways
Sustainable performance requires focusing on human fundamentals like rest, recovery, and mindset
High-performing corporate cultures often neglect wellbeing until burnout occurs
Adaptability and learning are the most critical skills for thriving amid AI-driven change
Leadership and communication skills will be essential for managing both people and AI agents
Human performance, leadership, and business strategy must be addressed together
AI should augment—not replace—human agency and critical thinking
Quotes
“Corporate high performers seem to think the rules of human performance don’t apply to them.”
“Work sucks for a lot of people—and it doesn’t have to.”
“Every human has a human operating system, and most people never optimize it.”
“Adaptability is the number one human skill for thriving.”
“As technology becomes more powerful, the human side matters even more.”
Chapters
00:02 Welcome and introduction 00:43 Tim’s journey from elite athletics to executive coaching 02:39 Applying human performance principles to corporate work 04:32 Burnout, sleep, and sustainable performance 07:22 Human potential and wellbeing at work 09:05 The human operating system 12:06 Human-centric AI and the cost of efficiency 14:12 Adaptability, learning, and future skills 18:06 Fear, uncertainty, and career resilience 23:10 Leadership skills for managing AI agents 29:49 Performance-managing AI and responsible use 36:29 Frontline leaders vs. executive perspectives 43:52 Mindset, perception, and human agency 47:27 Personal AI tools and experimentation 51:30 The Working Well podcast and closing
Tim Borys: https://timborys.com/
Working Well podcast: https://wrkdefined.com/podcast/the-working-well-podcast
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 103: Modernizing the Hospitality Experience to Exceed Expectations with Lance Thompson
Bob Pulver welcomes Lance Thompson, President of VIVI, a hospitality-focused AI company formerly known as SAVI. Lance shares his journey from luxury hospitality to tech entrepreneurship, highlighting how VIVI is bringing human-centered design to voice AI. They discuss the evolution of guest experiences, the importance of multilingual support, and how AI is being responsibly deployed to reduce friction for both guests and staff. From room service to HR to golf tee times, VIVI’s solutions demonstrate what happens when deep hospitality know-how meets cutting-edge AI.
Keywords
Lance Thompson, VIVI, SAVI, hospitality tech, voice AI, multilingual support, hotel operations, HR automation, guest experience, AI adoption, Microsoft Azure, Kinetic Solutions Group, Four Seasons, Vail Resorts, Aspen Hospitality, AI in travel, shadow AI, responsible AI, agentic search, reservations automation, guest personalization
Takeaways
Lance's career spanned luxury hospitality, including Four Seasons and Vail Resorts, before he shifted into tech by founding SAVI, now VIVI
VIVI is leveraging AI voice agents to support hotel operations, from answering phones to making reservations and handling HR inquiries
Multilingual capabilities are critical in hospitality; VIVI agents can fluently switch between languages in real time
Lance emphasizes the importance of consistency in service delivery — AI can ensure high-quality, brand-aligned experiences across time zones and locations
Unlike traditional decision-tree systems, VIVI’s tools rely on conversational AI that listens, adapts, and can be interrupted mid-sentence
Shadow AI poses risks for companies — Lance urges leaders to develop clear internal policies for responsible use and governance
VIVI's architecture is designed with data privacy and security in mind, with each client having its own isolated knowledge base
The future of hospitality AI lies in scalable, personalized tools that blend human empathy with machine precision
Quotes
“I wanted to be in a space where I could help people have a better experience in life — and hospitality gave me that.”
“If it can’t be interrupted, it’s not a conversation. And that’s what real guest service is about.”
“We don’t want to replace Janet in Reservations — we want to scale her.”
“Guests don’t want a link. They want an answer — fast, accurate, and in their language.”
“People aren’t afraid of AI. They’re asking when they can start using it to be more effective at their jobs.”
“We’re not building a static product. As the models improve, our tools do too.”
Chapters
00:00 - Intro and background from Carmel to Colorado 02:47 - Lance’s early passion for hospitality 05:09 - Discovering the limits of legacy systems 07:10 - The spark behind founding SAVI (now VIVI) 08:48 - Early demos, use cases, and multilingual potential 11:36 - Why real conversational AI matters 14:59 - Shadow AI and responsible adoption 17:54 - Building secure, client-specific AI agents 23:33 - Creating community through consistent service 26:39 - Managing real-time updates and seasonal accuracy 29:39 - Rethinking apps and improving discoverability 32:19 - The magic of humanlike conversations 36:02 - Delivering 5-star experiences through AI 39:30 - Personalizing brand voice (yes, even “absolutely”) 41:09 - Customizing user experience in real-time 43:03 - Transparency, trust, and guest empowerment 46:25 - What’s next for VIVI and hospitality AI 48:00 - Expanding into HR, golf, and reconciliation tools 51:06 - The travel planning use case 53:19 - New challenges in AI-driven SEO 53:23 - Final reflections and what’s ahead
Lance Thompson: https://www.linkedin.com/in/lance-thompson-92a5476
VIVI: http://www.vivi.bot/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 102: Enabling an Intelligent, Efficient, and Human-Centered Hiring Experience with Adam Gordon
In this insightful and forward-looking conversation, Bob Pulver speaks with Adam Gordon, co-founder and CEO of Poetry, about the rise of hiring enablement and how AI can be used to create consistency, speed, and scalability in talent acquisition. Adam reflects on his entrepreneurial journey from Candidate.ID to Poetry, unpacks the MOLT framework (Marketing, Operations, Learning, Tools), and explains how Poetry integrates AI to support recruiters and hiring managers with streamlined processes and guardrails to ensure quality and compliance. They also explore deeper workforce challenges like trust, burnout, and AI’s societal impact—especially in the context of shrinking employee tenure and the future of work.
Keywords
Adam Gordon, Poetry, hiring enablement, recruiter enablement, AI agents, MOLT framework, Candidate.ID, talent acquisition, recruiter productivity, ATS integration, AI guardrails, employer brand, candidate experience, AI governance, trust in leadership, DEI, burnout, workforce automation, staffing industry, responsible AI, talent intelligence
Takeaways
Adam Gordon’s journey from recruiting to tech entrepreneurship has been shaped by the need to empower recruiters with better tools and processes.
Poetry was created as a hiring enablement workspace to reduce reliance on fragmented point solutions and to streamline recruiter workflows.
The MOLT framework (Marketing, Operations, Learning, Tools) organizes recruiter needs in a way that supports end-to-end hiring activity.
Poetry emphasizes product design simplicity and consistency, integrating AI without exposing users to the risks of hallucination or inconsistent prompts.
Recruiters using Poetry can save up to 25% of their time per day, but there's concern about how organizations reinvest those gains.
Guardrails are built into Poetry to ensure a consistent employer brand, tone, and candidate experience—especially important given drops in organizational trust.
The move from “recruiter enablement” to “hiring enablement” reflects how recruiters and hiring managers must work together in today’s TA ecosystems.
A new Poetry workspace tailored for staffing companies is set to launch in Q2 2026, signaling the platform’s evolution and market expansion.
Quotes
“Recruiting is a team sport.”
“We’ve put such strong guardrails in place, it’s not possible for Poetry to hallucinate.”
“We wanted to eliminate recruiters having to log into 30 different tools to do their job.”
“I’ve described it as an age of employment brutality—CEOs don’t want more people on payroll.”
“The trust barometer is dropping, and without trust, the candidate experience and employer brand collapse.”
“Just because you can build something doesn’t mean you’ve built a technology company.”
Chapters
00:00 - Introduction and Adam’s Background 01:17 - From Social Media Search to Candidate.ID 05:32 - The Vision Behind Poetry 07:27 - Simplicity, Product Design, and AI Agents 09:16 - MOLT: Marketing, Operations, Learning, Tools 11:16 - ATS Integration and 25% Time Savings 14:05 - The Reinvestment Dilemma 18:34 - Talent Intelligence and Bite-Sized Research 22:01 - Guardrails Over Free Prompting 24:51 - Mitigating Risk and Ensuring Consistency 29:58 - From Recruiter to Hiring Enablement 33:40 - Empowering Employer Brand and Talent Attraction 37:50 - The Importance of Trust and Communication 43:25 - Turnover, Tenure, and the Workforce Equation 49:22 - Responsible AI and Societal Impact 54:35 - Creative AI Tools and Industry Disruption 56:44 - Building a Scalable Tech Company 59:46 - 2026 Preview: Poetry for Staffing Companies
Adam Gordon: https://www.linkedin.com/in/adamwgordon/
Poetry: https://www.poetryhr.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 101: Reshaping the Workforce Through Sensemaking and Trusted Talent Intelligence with Vijay Swami
Bob Pulver talks with Vijay Swami, Co-Founder and CEO of Draup, a global leader in AI-powered talent intelligence. Vijay shares his journey from early roles in call center forecasting to founding a management consultancy and then TalentNeuron, later acquired by CEB. With deep roots in data science and a vision for empowering internal analytics teams, Vijay built Draup to tackle labor market complexity using advanced AI, unstructured data, and rich taxonomies. Vijay and Bob discuss building trusted, AI-powered talent intelligence platforms that bridge data complexity and business decision-making, and how human-centric, explainable AI is reshaping strategic workforce planning. They cover the growing importance of verification skills, ethical AI practices, the future of people analytics, the architecture of trusted and explainable AI systems, and the evolving role of humans and agents in enterprise workflows.
Keywords
Vijay Swami, Draup, AI in HR, People Analytics, Strategic Workforce Planning, verification skills, ethical AI, talent intelligence, agentic AI, skills-based hiring, cloud data, explainability, trust, synthetic data, digital twins, ETTER, Curie, job displacement, augmented intelligence, transparency
Takeaways
AI's value in HR lies in sense-making from complex and unstructured data, not just simplifying workflows.
Verification skills—like content and narrative validation—are emerging as critical in a world flooded with AI-generated data.
Draup’s AI agent Curie supports HR and analytics professionals with leadership-ready narratives and scenario planning.
The platform's ETTER model goes beyond job descriptions to assess real work through contracts, SLAs, and KPIs.
Transparency and traceability are foundational to building trust in AI systems; Draup compares its models against industry benchmarks.
Ethical AI practices include open documentation, interpretability, and empowering analysts to correct or clarify information.
AI should not be viewed solely as a job killer; clear, specific skills definitions in job postings can increase hiring and help target investments.
True transformation requires shifting from jobs to workflows and task orchestration, blending human effort, AI agents, and automation.
Quotes
“We want to tell the story—not just show the data—to help people analytics become a leadership engine.”
“Verification skills are the next battery of capabilities organizations must build for a trustworthy enterprise.”
“Transparency is about giving customers the right to know—even if they don’t ask.”
“HR has the opportunity to become heroes in this AI wave by unlocking the true nature of work.”
“We should be therapists for data anxiety—helping organizations see what’s real versus what’s a myth.”
“I’m a net AI job creator guy—because there’s no shortage of work, just a need to match skills and workflows more intelligently.”
Chapters
00:05 - Introduction and Vijay’s background 00:57 - From forecasting analyst to AI-powered platforms 03:18 - Rethinking labor intelligence beyond job descriptions 05:39 - Building a sense-making engine from complex data 07:42 - Storytelling, context, and executive alignment 11:15 - The rise of verification skills 14:04 - Creating a trusted and transparent AI ecosystem 19:31 - Unlocking the true nature of work through ETTER 22:44 - Ethical AI and human-centric design 32:19 - How data becomes a therapeutic tool 35:14 - AI’s real impact on jobs and skills demand 45:25 - Strategic work planning beyond job roles 49:19 - Optimism, augmentation, and future-proofing teams 50:34 - Closing thoughts and appreciation
Vijay Swami: https://www.linkedin.com/in/vijay-swaminathan-a44101/
Draup: https://draup.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 100: Pulverizing the Journey to Human-Centric AI Readiness with Bob Pulver
In this milestone 100th episode, host Bob Pulver reflects on the journey of Elevate Your AIQ, sharing why he started the podcast, what he's learned from nearly 100 conversations, and what’s ahead for the show and its community. He revisits recurring themes such as AI literacy, responsible innovation, and human-centric transformation—connecting them to his personal experiences, professional background, and passion for empowering others. This solo conversation is both a look back and a call to action for individuals and organizations to embrace AI thoughtfully and elevate their AIQ together.
Keywords
AIQ, AI literacy, responsible AI, human-centric design, talent transformation, skills-based hiring, human potential, CHRO of the future, work redesign, education reform, podcasting, Substack, transformation leaders, automation strategy, AI readiness, AI ethics, trust, transparency, fairness, lifelong learning, community, AI-powered workforce
Takeaways
Podcasting is a powerful outlet for exploring curiosity, storytelling, and continuous learning—especially for neurodivergent thinkers.
Human-centric AI readiness is not just about tools or tech—it’s about mindset, adaptability, and lifelong learning.
AIQ exists on three levels: individual, team, and organizational—each requiring a blend of skills, tools, and ethical judgment.
Responsible AI is central to modern transformation—touching on transparency, fairness, ethics, and explainability.
CHROs and people leaders have dual responsibilities as strategic architects of work and catalysts for responsible innovation.
Hiring for skills and potential—rather than pedigree—is crucial to unlocking hidden talent and countering bias.
Education and talent development must evolve to equip students and workers with the durable skills of the AI-powered future.
Communities of practice and peer generosity are vital to collective learning and resilience in this era of rapid change.
Quotes
“Use AI where you should, not wherever you can.”
“We’ve always adapted to new technologies—this time is no different.”
“Human-centricity and human potential are key overarching themes of this show, and of the future of work.”
“AIQ isn’t just about literacy—it’s about readiness, judgment, and mindset.”
“If you are a DEI advocate, you are now a responsible AI advocate.”
“You can control your own destiny—you’re capable of more than you think.”
Chapters
00:00 Welcome and Gratitude for Episode 100 00:50 Human-Centric AI and the Purpose of the Show 02:32 Authenticity, Creativity, and Focus 04:35 My Background: Corporate to Independent 07:18 Early Exposure to AI at IBM and Personal Stakes 09:55 Start with Processes and Business Challenges, Not Tech 11:48 Three Levels of AIQ: Individual, Team, Org 13:45 Beyond Prompting: Augmenting Capabilities 15:20 Responsible AI: Use and Design 17:30 The Role of Trust, Transparency, and Fairness 19:50 DEI and Responsible AI Are Inseparable 21:10 Skills-Based Hiring and Hidden Potential 23:00 Designing Work for Human + AI Partnership 25:40 Lifelong Learning and the Future of Education 27:20 CHROs as Architects and Innovation Catalysts 29:30 Offense and Defense in Responsible Innovation 31:00 A Call to Action for Listeners and the Community 32:10 What’s Next: Live Shows, Events, Writing, and Community 33:20 Closing Gratitude and Future Outlook
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 99: Advancing Human-Centered AI and Collaborative Intelligence with Ross Dawson
Bob Pulver sits down with Ross Dawson, world-renowned futurist, serial entrepreneur, and creator of the Humans + AI community. With decades of foresight expertise, Ross shares his evolving vision of human-AI collaboration — from systems-level transformation to individual cognitive augmentation. The conversation explores why organizations must reframe their approach to talent, capability, and value creation in the age of AI, and how human agency, trust, and fluid talent models will define the future of work.
Keywords
Ross Dawson, Humans + AI, AI roadmap, ThoughtWeaver, AI teaming, digital twins, augmented thinking, talent marketplaces, future of work, systems thinking, AI in organizations, AI in education, trust in AI, AI-enabled teams, cognitive diversity, latent talent, fluid talent, organizational design
Takeaways
The “Humans + AI” framework centers on complementarity, not substitution — AI should augment and elevate human potential.
AI maturity is not just technical — it requires cultural readiness, mindset shifts, and systems-level thinking.
Trust in AI must be calibrated; both over-trusting and under-trusting limit value creation.
AI-enabled teams will rely on clear role design, thoughtful delegation of decision rights, and frameworks for collaborative intelligence.
Digital twins and AI agents offer different organizational advantages — one mimics individuals, the other scales domain expertise.
Organizations must reimagine work as networks of capabilities, not boxes of job descriptions.
Talent marketplaces are an early expression of fluid workforce models but require intentional design and leadership buy-in.
The most human-centric organizations will be best positioned to attract talent and thrive in the AI era.
Quotes
“AI should always be a complement to humans — not a substitute.”
“We live in a humans + AI world already. The question is how we shape it.”
“Mindset really frames how much value we can get from AI — individually and societally.”
“You know more than you can tell. That gap between tacit knowledge and what AI can access is where humans still shine.”
“Start with a vision — not a headcount reduction. Ask what kind of organization you want to become.”
“We can use AI not just to apply existing capabilities but to uncover and expand them.”
Chapters
00:00 - Welcome and Ross Dawson’s introduction 01:10 - From futurism to Humans + AI: key focus areas 03:30 - How AI is shifting public curiosity and mindset 06:00 - Systems-level thinking and responsible AI use 08:20 - AI in education and enterprise transformation 11:10 - The rise of AI-augmented thinking 14:00 - Calibrating trust in AI and human roles in teams 17:00 - Designing humans + AI teaming frameworks 20:30 - Delegation models and decision architecture 23:20 - Digital twins vs synthetic AI agents 26:00 - The value of tacit knowledge and cognitive diversity 30:00 - Empowering individuals amidst career uncertainty 32:10 - Breaking out of job “boxes” with fluid talent models 35:00 - Talent marketplaces and barriers to adoption 38:00 - Human-centric leadership in AI-powered transformation 41:00 - Strategic roadmaps and vision-led change 45:30 - Ross’s personal AI tools and experiments 52:00 - Final thoughts on AI’s role in augmenting human creativity
Ross Dawson: https://www.linkedin.com/in/futuristkeynotespeaker
Humans + AI: https://humansplus.ai
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 98: Empowering an AI-Ready Generation to Learn, Create, and Lead with Jeff Riley
Bob Pulver speaks with Jeff Riley, former Massachusetts Commissioner of Education and Executive Director of Day of AI, a nonprofit launched out of MIT. They explore the urgent need for AI literacy in K-12 education, the responsibilities of educators, parents, and policymakers in the AI era, and how Day of AI is building tools, curricula, and experiences that empower students to engage with AI critically and creatively. Jeff shares both inspiring examples and sobering warnings about the risks and rewards of AI in the hands of the next generation.
Keywords
Day of AI, MIT RAISE, responsible AI, AI literacy, K-12 education, student privacy, AI companions, Common Sense Media, AI policy, AI ethics, educational technology, AI curriculum, teacher training, creativity, critical thinking, digital natives, student agency, future of education, AI and the arts, cognitive offloading, generative AI, AI hallucinations, PISA 2029, AI festival
Takeaways
Day of AI is equipping teachers, students, and families with tools and curricula to understand and use AI safely, ethically, and productively.
AI literacy must start early and span disciplines; it’s not just for coders or computer science classes.
Students are already interacting with AI — often without adults realizing it — including the widespread use of AI companions.
A core focus of Day of AI is helping students develop a healthy skepticism of AI tools, rather than blind trust.
Writing, critical thinking, and domain knowledge are essential guardrails as students begin to use AI more frequently.
The AI Festival and student policy simulation initiatives give youth a voice in shaping the future of AI governance.
AI presents real risks — from bias and hallucinations to cognitive offloading and emotional detachment — especially for children.
Higher education and vocational programs are beginning to respond to AI, but many are still behind the curve.
Quotes
“AI is more powerful than a car — and yet we’re throwing the keys to our kids without requiring any kind of driver’s ed.”
“We want kids to be skeptical and savvy — not just passive consumers of AI.”
“Students are already using AI companions, but most parents have no idea. That gap in awareness is dangerous.”
“Writing is thinking. If we outsource writing, we risk outsourcing thought itself.”
“The U.S. invented AI — but we risk falling behind on AI literacy if we don’t act now.”
“Our goal isn’t to scare people. It’s to prepare them — and let young people lead where they’re ready.”
Chapters
00:00 - Welcome and Introduction to Jeff Riley 01:11 - From Commissioner to Day of AI 02:52 - MIT Partnership and the Day of AI Mission 04:13 - Global Reach and the Need for AI Literacy 06:37 - Resources and Curriculum for Educators 08:18 - Defining Responsible AI for Kids and Schools 11:00 - AI Companions and the Parent Awareness Gap 13:51 - Critical Thinking and Cognitive Offloading 16:30 - Student Data Privacy and Vendor Scrutiny 21:03 - Encouraging Creativity and the Arts with AI 24:28 - PISA’s New AI Literacy Test and National Readiness 30:45 - Staying Human in the Age of AI 34:32 - Higher Ed’s Slow Adoption of AI Literacy 39:22 - Surfing the AI Wave: Teacher Buy-In First 42:35 - Student Voice in AI Policy 46:24 - The Ethics of AI Use in Interviews and Assessments 53:25 - Creativity, No-Code Tools, and Future Skills 55:18 - Final Thoughts and Festival Info
Jeff Riley: https://www.linkedin.com/in/jeffrey-c-riley-a110608b
Day of AI: https://dayofai.org
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 97: Challenging the AI Narrative and Redefining Digital Fluency with Jeff and MJ Pennington
Bob sits down with Jeff Pennington, former Chief Research Informatics Officer at the Children’s Hospital of Philadelphia (CHOP) and author of You Teach the Machines, and his daughter Mary Jane (MJ) Pennington, a recent Colby College graduate working in rural healthcare analytics. Jeff and MJ reflect on the real-time impact of AI across generations—from how Gen Z is navigating AI’s influence on learning and careers, to how large institutions are integrating AI technologies. They dig into themes of trust, disconnection, data quality, and what it truly means to be future-proof in the age of AI.
Keywords
AI literacy, Gen Z, future of work, healthcare AI, trusted data, responsible AI, education, automation, disconnection, skills, strategy, adoption, social media, transformation
Takeaways
Gen Z’s experience with AI is shaped by a rapid-fire sequence of disruptions: COVID, remote learning, and now Gen AI
Both the podcast and the book You Teach the Machines serve as a “time capsule” capturing AI’s societal impact
Organizations are inadvertently cutting off AI-native talent from the workforce
Misinformation, over-hype, and poor PR from big tech are fueling widespread public fear and distrust of AI
AI adoption must move from top-down mandates to bottom-up innovation, empowering frontline workers
Data quality is a foundational issue, especially in healthcare and other high-stakes domains
The real opportunity is in leveraging AI to elevate human work through augmentation, creativity, and access
Disconnection and over-reliance on AI are emerging as long-term social risks, especially for younger generations
Quotes
“It’s a universal fear now. Everyone has to ask: what makes you AI-proof?”
“The vitality of democracy depends on popular knowledge of complex questions.”
“We're not being given the option to say no to any of this.”
“I’m 100% certain the current winners in AI will not be the winners in five to ten years.”
Chapters
00:02 Welcome and Guest Introductions 00:48 MJ’s Path: From Computational Biology to Rural Healthcare 01:52 Why They Launched the Podcast You Teach the Machines 03:25 Jeff’s Work at CHOP and the Pediatric LLM Project 06:47 Making AI Understandable: The Book’s Purpose 09:11 Navigating Fear and Trust in AI Headlines 11:31 Gen Z, AI-Proof Careers, and Entry-Level Job Loss 16:33 Why Resilience is Gen Z’s Underrated Superpower 18:48 Disconnection, Dopamine, and the Social Cost of AI 22:42 AI’s PR Problem and the Survival Signals We're Ignoring 25:58 Chatbots as Addictive Companions: Where It Gets Dark 29:56 Choosing to Innovate: A More Hopeful AI Future 32:11 The Dirty Truth About Data Quality and Trust 36:20 How a Brooklyn Coffee Company Fine-Tuned AI with Their Own Data 40:12 Why “Throwing AI on It” Isn’t a Strategy 44:20 Measuring Productivity vs. Driving Meaningful Change 48:22 The Real ROI: Empowering People, Not Eliminating Them 53:26 Healthcare’s Lazy AI Priorities (and What We Should Do Instead) 57:12 How Gen Z Was Guided Toward Coding—And What Happens Now 59:37 Dependency, Education, and Democratizing Understanding 1:04:22 AI’s Impact on Educators, Students, and Assessment 1:07:03 The Real Threat Isn’t Just Job Loss—It’s Human Disconnection 1:10:01 Defaulting to AI: Why Saying "No" Is No Longer an Option 1:12:30 Final Thoughts and Where to Find Jeff and MJ’s Work
Jeff Pennington: https://www.linkedin.com/in/penningtonjeff/
Mary Jane Pennington: https://www.linkedin.com/in/maryjane-pennington-31710a175/
You Teach The Machines (book): https://www.audible.com/pd/You-Teach-the-Machines-Audiobook/B0G27833N9
You Teach The Machines (podcast): https://open.spotify.com/show/4t6TNeuYTaEL1WbfU5wsI0?si=bb2b1ec0b53d4e4e
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 96: Building Learning Communities for a Responsible Future of Work with Enrique Rubio
Bob Pulver sits down with community builder and HR influencer Enrique Rubio, founder of Hacking HR. Enrique shares his journey from engineering to HR, his time building multiple global communities, and why he ultimately returned “home” to Hacking HR to pursue its mission of democratizing access to high-quality learning. Bob and Enrique discuss the explosion of AI programs, the danger of superficial “prompting” education, the urgent need for governance and ethics, and the risks organizations face when employees use AI without proper training or oversight. It’s an honest, energizing conversation about community, trust, and building a responsible future of work.
Keywords
Enrique Rubio, Hacking HR, Transform, community building, democratizing learning, HR capabilities, AI governance, AI ethics, shadow AI, responsible AI, critical thinking, AI literacy, organizational risk, data privacy, HR community, learning access, talent development
Takeaways
Hacking HR was founded to close capability gaps in HR and democratize access to world-class learning at affordable levels.
The community’s growth accelerated during COVID when others paused events; Enrique filled the gap with accessible virtual learning.
Many AI programs focus narrowly on prompting rather than teaching leaders to think, govern, and transform responsibly.
Companies must assume employees and managers are already using AI and provide clear do’s and don’ts to mitigate risk.
Untrained use of AI in hiring, promotions, and performance management poses serious liability and fairness concerns.
Critical thinking is declining, and generative AI risks accelerating that trend unless individuals stay engaged in the reasoning process.
Community must be built for the right reasons—transparency, purpose, and service—not just lead generation or monetization.
AI strategies often overlook workforce readiness; literacy and governance are as important as tools and efficiency goals.
Quotes
“Hacking HR is home for me.”
“We’re here to democratize access to great learning and great community.”
“Prompting is becoming an obsolete skill—leaders need to learn how to think in the age of AI.”
“Assume everyone creating something on a computer is using AI in some capacity.”
“If managers make decisions based on AI without training, that’s a massive liability.”
“Most AI strategies can be summarized in one line: we’re using AI to be more efficient and productive.”
Chapters
00:00 Catching up and meeting in person at recent events 01:18 Enrique’s career journey and return to Hacking HR 04:43 Democratizing learning and supporting a global HR community 07:17 The early days of running virtual conferences alone 09:39 Why affordability and access are core to Hacking HR’s mission 13:13 The rise of AI programs and the noise in the market 15:58 Prompting vs. true strategic AI leadership 18:21 The importance of community intent and transparency 20:42 Training leaders to think, reskill, and govern in the age of AI 23:05 Dangers of data misuse, privacy gaps, and dark-web training sets 26:08 Critical thinking decline and AI’s impact on cognition 29:16 Trust, data provenance, and risks in recruiting use cases 31:48 The need for organizational AI manifestos 32:47 Managers using AI for people decisions without training 35:12 Why governance is essential for fairness and safety 39:12 The gap between stated AI strategies and people readiness 43:54 Accountability across the AI vendor chain 46:18 Who should lead AI inside organizations 49:28 Responsible innovation and redesigning work 53:06 Enrique’s personal AI tools and closing reflections
Enrique Rubio: https://www.linkedin.com/in/rubioenrique
Hacking HR: https://hackinghr.io
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 95: Confronting the Realities of Successful AI Transformation with Sandra Loughlin
Bob Pulver and Sandra Loughlin explore why most narratives about AI-driven job loss miss the mark and why true productivity gains require deep changes to processes, data, and people—not just new tools. Sandra breaks down the realities of synthetic experts, digital twins, and the limits of current enterprise data maturity, while offering a grounded, hopeful view of how humans and AI will evolve together. With clarity and nuance, she explains the four pillars of AI literacy, the future of work, and why leaning into AI—despite discomfort—is essential for progress.

Keywords
Sandra Loughlin, EPAM, learning science, transformation, AI maturity, synthetic agents, digital twins, job displacement, data infrastructure, process redesign, AI literacy, enterprise AI, productivity, organizational change, responsible innovation, cognitive load, future of work

Takeaways
Claims of massive AI-driven job loss overlook the real drivers: cost-cutting and reinvestment, not productivity gains.
True AI value depends on re-engineering workflows, not automating isolated tasks.
Synthetic experts and digital twins will reshape expertise, but context and judgment still require humans.
Enterprise data bottlenecks—not technology—limit AI’s ability to scale.
Humans need variability in cognitive load; eliminating all “mundane” work isn’t healthy or sustainable.
AI natives—companies built around data from day one—pose real disruption threats to incumbents.
Productivity gains may increase demand for work, not reduce it, echoing Jevons’ Paradox.
AI literacy requires understanding technology, data, processes, and people—not just tools.

Quotes
“Only about one percent of the layoffs have been a direct result of productivity from AI.”
“If you automate steps three and six of a process, the work just backs up at four and seven.”
“Synthetic agents trained on true expertise are what people should be imagining—not email-writing bots.”
“AI can’t reflect my judgment on a highly complex situation with layered context.”
“To succeed with AI, we have to lean into the thing that scares us.”
“Humans can’t sustain eight hours of high-intensity cognitive work—our brains literally need the boring stuff.”

Chapters
00:00 Introduction and Sandra’s role at EPAM
01:39 Who EPAM serves and what their engineering teams deliver
03:40 Why companies misunderstand AI-driven job loss
07:28 Process bottlenecks and the real limits of automation
10:51 AI maturity in enterprises vs. AI natives
14:11 Why generic LLMs fail without specialized expertise
16:30 Synthetic agents and digital twins
18:30 What makes workplace AI truly dangerous—or transformative
23:20 Data challenges and the limits of enterprise context
26:30 Decision support vs. fully autonomous AI
31:48 How organizations should think about responsibility and design
34:21 AI natives and market disruption
36:28 Why humans must lean into AI despite discomfort
41:11 Human trust, cognition, and the need for low-intensity work
45:54 Responsible innovation and human-AI balance
50:27 Jevons’ Paradox and future work demand
54:25 Why HR disruption is coming—and why that can be good
58:15 The four pillars of AI literacy
01:02:05 Sandra’s favorite AI tools and closing thoughts

Sandra Loughlin: https://www.linkedin.com/in/sandraloughlin
EPAM: https://epam.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 94: Redefining Recruitment For a More Human-Centric Hiring Experience with Keith Langbo
Bob Pulver speaks with Keith Langbo, CEO and founder of Kelaca, about redefining recruitment in the AI era. Keith shares why he founded Kelaca to prioritize people over process, how core values like kindness and collaboration shape culture, and why trust and choice must be built into AI-powered recruiting tools. Bob and Keith explore evolving models of hiring, including fractional workforces, agentic systems, and data-informed decision-making — all rooted in a future where humans remain in control of the technology that serves them.

Keywords
Keith Langbo, Kelaca, recruitment, hiring, talent acquisition, AI in recruiting, agentic systems, culture add, core values, psychometrics, responsible AI, fractional workforce, gig economy, recruiting automation, candidate experience, structured interviews, Kira, human-centric design, AI trust, global hiring, digital agents, recruitment tech, NLP sourcing, recruiting innovation

Takeaways
Keith founded Kelaca to humanize the recruitment experience, treating people as partners — not products.
Modern recruiting must shift from transactional, resume-driven models to more consultative, intelligence-based practices.
AI’s greatest value lies in giving candidates and clients choice, not replacing humans — especially for real-time updates and communication preferences.
Recruiters should move from “human-in-the-loop” to “humans in control” — using AI to augment but not automate judgment.
Future hiring models may rely on digital agents representing both candidates and employers, enabling richer, data-driven matches.
Core values — like kindness, accountability, and enthusiasm — are essential to maintaining culture across full-time and fractional teams.
Structured data is key to overcoming bias and improving hiring quality, but psychometrics alone can’t capture experience or growth.
Many current tools automate broken processes; real innovation requires first rethinking what “better” hiring looks like.

Quotes
“I wanted to treat people like people, not like products.”
“AI powered but human driven — that’s the experience I want to create.”
“Resumes are broken. Interviews are often charisma contests. We can do better.”
“Humans don’t just need to be in the loop — they need to be in control.”
“I don’t care if you’re full-time or fractional. You still need to show kindness and a willingness to learn.”
“We’re on the verge of bots talking to bots. That’s exciting — and terrifying.”

Chapters
00:00 Introduction and Keith’s mission behind founding Kelaca
02:35 The candidate and client frustrations with traditional recruiting
05:10 Why resumes and interviews are broken — and what to do instead
07:10 Building feedback loops and AI-enabled candidate communication
10:45 Choice and context in AI tools: respecting human preference
13:44 From “human in the loop” to “human in control”
18:12 Agentic hiring and the rise of digital representation
25:10 Gig work and applying culture fit to fractional talent
29:34 Core values as the foundation of culture, not employment status
33:22 Responsible AI, fairness, and trust in hiring decisions
40:00 The hype cycle of recruiting tech and design thinking
42:56 AI as the modern calculator: from caution to capability
47:16 Global perspectives: AI adoption in US vs UK recruiting
53:08 Keith’s favorite AI tools and Kelaca’s new product, Kira
56:28 Closing thoughts and appreciation

Keith Langbo: https://www.linkedin.com/in/keithlangbo
Kelaca: https://kelaca.com/
KIRA Webinar Series: https://www.eventbrite.com/e/how-to-fix-the-first-step-in-hiring-to-drive-retention-introducing-kira-tickets-1853418256899
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 93: Strengthening Human Connection to Build Trust in AI-Fueled Transformation with Dan Riley
Bob Pulver talks with Dan Riley, CEO and Co-founder of RADICL, about reshaping work through connection, trust, and clarity. From his roots as a punk rock musician to building Modern Survey and RADICL, Dan shares how creativity, curiosity, and courage fuel his leadership philosophy. Together, they explore the balance between human imperfection and technological advancement, why “high tech” must still serve human needs, and how organizations can build cultures that learn, listen, and adapt. The discussion spans themes of AI strategy, responsible design, employee listening, and the enduring value of genuine human connection.

Keywords
Dan Riley, RADICL, Modern Survey, Aon, employee listening, people analytics, connection, trust, AI ethics, human-AI collaboration, imperfection, curiosity, creativity, collective intelligence, organizational network analysis, People Analytics World, Unleash, Transform, learning culture, human connection, responsible AI

Takeaways
Imperfection is a defining strength of humanity — and the source of creativity and innovation.
The best technology solves real human problems in the flow of work, not just productivity gaps.
AI is a mirror, amplifying human intent and behavior; if we lead with empathy and ethics, AI learns from that.
Clarity, communication, and transparency are critical to avoiding “AI chaos” inside organizations.
Continuous listening and connection are the new foundations for engagement and trust.
Curiosity and conversation are essential skills for navigating the fast-moving future of work.
The most effective teams balance diverse strengths rather than relying solely on “rock stars.”
True progress happens when we keep the human conversation going — across roles, hierarchies, and perspectives.

Quotes
“I define myself as an artist first — a musician, filmmaker, who randomly fell into HR and tech.”
“The most beautiful part about being human is that we’re imperfect — that’s where the best ideas come from.”
“AI doesn’t fix our flaws; it amplifies them. It’s a mirror of how we show up.”
“For technology to work, it has to be solving a human problem in the flow, not just adding to the stack.”
“It’s okay to say, ‘We don’t have it all figured out yet’ — just be transparent about where you are.”
“You’ll never regret having a conversation about something important.”

Chapters
00:03 – Welcome and Dan’s background: from punk rock to HR tech
01:45 – Founding Modern Survey and RADICL’s mission around trust and impact
05:14 – The changing landscape of work
06:42 – Highlights from People Analytics World, Transform, and Unleash
09:50 – Rise of human connection as the dominant theme in work tech
13:10 – Clarity, communication, and the need for an AI strategy
16:19 – Productivity, balance, and reinvesting in people
18:36 – The risk of over-automation and the value of learning
22:16 – Teaching curiosity and critical thinking in an AI world
27:25 – Why open conversations about AI matter more than ever
33:51 – Employee listening, continuous dialogue, and the evolution of engagement
37:22 – How AI enhances understanding and connection between teams
40:06 – Organizational network analysis and adaptive learning
43:21 – Connection, mentorship, and collective intelligence
46:03 – AI as a mirror: amplification of human behavior and bias
48:36 – Building balanced, imperfect, and effective teams
51:48 – Tools, curiosity, and the limits of generative AI
55:35 – Trusting your judgment and maintaining critical thinking
56:34 – Staying human amid synthetic connection
57:45 – Closing reflections and the call for ongoing dialogue

Dan Riley: https://www.linkedin.com/in/dan-riley-57b9431
RADICL: http://www.radiclwork.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 92: Appreciating the Importance of Self-Awareness to Human-AI Collaboration with Brad Topliff
Bob Pulver talks with creative technologist and entrepreneur Brad Topliff about building more human-centered systems for the AI era. Brad reflects on his nonlinear career—from early work in design and user experience, to many years at data and analytics company TIBCO, to his latest venture, SelfActual, which helps people and teams cultivate self-awareness, strengths, and alignment. Together, Bob and Brad explore the intersections of identity, trust, data ownership, and imagination in the workplace, and how understanding ourselves better can make AI more supportive—not more invasive. The conversation bridges psychology, technology, and ethics to imagine a future of work where humans remain firmly in control of their data, choices, and growth.

Keywords
Brad Topliff, SelfActual, TIBCO, self-awareness, positive psychology, data ownership, digital identity, AI ethics, imagination, human-centric design, trust, internal mobility, talent data, distributed identity, psychological safety, future of work

Takeaways
Self-awareness is foundational to effective teams and ethical AI use.
Personal data about strengths and values should be owned by the individual, not the employer.
AI can serve as a mirror and reframing tool, helping people build perspective—not replace human judgment.
Internal mobility and growth depend on psychological safety and discretion around what employees share.
Positive psychology and imagination can help teams align without reducing people to static personality types.
The next era of HR tech should prioritize trust, transparency, and consent in how personal data is used.
True human readiness for AI means combining durable human skills with thoughtful technology design.

Quotes
“I became a translator between the arts, the engineers, and leadership—and that’s carried through everything I’ve done.”
“When you create data about yourself, who owns it? You? Your organization? The answer matters for trust.”
“Most people think they’re self-aware—but only about twelve percent actually are.”
“A job interview is two people sitting across the table from each other lying. We both present what we think the other wants to hear.”
“If you give people autonomy and psychological safety, they’ll show up more fully as themselves.”
“In the presence of trust, you don’t need security.”

Chapters
00:03 – Welcome and Brad’s background in design, Apple roots, and TIBCO experience
05:46 – From UX to data: connecting human insight with enterprise technology
07:48 – Self-awareness, ownership of personal data, and building SelfActual
11:00 – The tension between authenticity, masking, and “bringing your whole self” to work
18:19 – Digital credentials, resumes, and rethinking candidate data ownership
23:08 – Internal mobility, verifiable credentials, and distributed identity
32:51 – Broad skills vs. specialization and the role of AI in talent matching
34:48 – Self-awareness, imagination, and positive psychology at work
46:48 – Rethinking internal mobility and autonomy for well-being and growth
49:26 – Human-centric AI readiness and the limits of automation
58:40 – Trust, security, and ownership of data in organizational AI systems
01:02:37 – Reflections on digital twins, imagination, and collective intelligence
01:08:06 – Closing thoughts and SelfActual’s human-first approach

Brad Topliff: https://www.linkedin.com/in/bradtopliff
SelfActual: https://selfactual.ai
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 91: Evolving Candidate Engagement from Conversational AI to Hiring Intelligence with Prem Kumar
Bob Pulver speaks with Prem Kumar, CEO and Co-founder of Humanly.io, about the evolution of hiring technology and the company's transition from a conversational AI tool to a full-fledged AI-powered hiring platform. Prem discusses the impact of Humanly’s recent acquisitions, expansion into post-hire engagement, and how they help employers address challenges in both high-volume and knowledge worker recruiting. Prem emphasizes the need for responsible, inclusive, and human-centric AI design, and explains how Humanly is helping organizations speed up hiring without sacrificing quality, fairness, or candidate experience.

Keywords
Humanly, conversational AI, AI interviewing, responsible AI, candidate experience, recruiting automation, employee engagement, AI acquisitions, ethics, RecFest, quality of hire, neurodiversity, candidate feedback, interview intelligence, AI coach, sourcing automation

Takeaways
Humanly’s evolution includes three strategic acquisitions that expand its platform from candidate screening to post-hire engagement.
The company’s mission is to help employers talk to 100% of their applicants—not just the 5% that typically make it through—and reduce time-to-hire.
Prem highlights how AI can reduce ghosting by creating 24/7 availability and real-time Q&A touchpoints for candidates.
Interview feedback tools and coaching features are being developed for both candidates and recruiters.
AI workflow integration is critical—tools must operate within a recruiter’s day-to-day flow to be effective.
Humanly’s platform helps uncover quality-of-hire insights by connecting interview behaviors with long-term employee outcomes.
Third-party AI audits and ethical guardrails are needed to keep AI-driven hiring trustworthy.
Insights from diverse candidate populations—including neurodiverse candidates and early-career talent—are shaping Humanly’s inclusive design practices.

Quotes
“It’s not human vs. AI—it’s AI vs. being ignored.”
“Our goal is to reduce time-to-hire without compromising quality or fairness.”
“We’re obsessed with the problem, not just the solution. That’s what keeps us grounded as we scale.”
“Responsible AI should be audited just like SOC 2 or ISO—trust is foundational in hiring.”
“The best interview for one role won’t be the same for another. That’s where personalization and learning matter.”
“Everything we’ve done to improve access for neurodiverse candidates has made the experience better for everyone.”

Chapters
00:00 – Intro and Prem’s Background
01:00 – Humanly's Origins and the Candidate Experience Gap
03:00 – 2025 Growth, Funding, and Acquisition Strategy
05:15 – From Conversational AI to Full-Funnel Hiring Platform
06:30 – High-Volume and Knowledge Workers
08:00 – Combating Ghosting and Delays with AI Speed
10:30 – Candidate Support and Interview Feedback
12:00 – Creating a 24/7 Conversational Layer for Applicants
13:45 – Data-Driven Hiring and Candidate Self-Selection
15:00 – Interview Coaching and Practice Tools
17:00 – Acquisitions and Platform Consolidation Feedback
18:45 – Responsible AI and Third-Party Auditing
21:00 – Partnering with Values-Aligned Teams and Investors
22:00 – Measuring Candidate Experience Across All Interactions
24:00 – Connecting Interview Behavior to Quality of Hire
26:00 – Coaching Recruiters and Interview Intelligence
28:45 – Expanding Into Post-Hire and Internal Conversations
30:00 – The Future of AI in HR and Internal Use Cases
34:00 – Designing Inclusively for Diverse Candidate Needs
36:00 – Modalities, Accessibility, and Equity in Interviewing
39:00 – Generative AI Reflections and Everyday Use
42:00 – Wrapping Up: What's Next for Humanly

Prem Kumar: https://www.linkedin.com/in/premskumar
Humanly: https://humanly.io
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 90: Exploring How AI Shifts Our Approach to Content and Authenticity with William Tincup
In this lively and wide-ranging conversation, Bob Pulver welcomes William Tincup, Co-founder of the WRKdefined Podcast Network, HR tech expert, and longtime friend of the show. Together they explore the evolution of podcasting, from its early scrappy days to today’s community-driven, AI-enhanced ecosystem. William shares his philosophy on personal authenticity, the rise of “PSO” — podcast search optimization — and why he believes we’re moving from search to conversation as the new model of discovery. They also dive into the ethics of personalization, digital identity, and privacy in a world where every click is data. From the practical uses of AI in podcast production to the philosophical questions about digital twins and second lives online, this episode blends humor, honesty, and the kind of deep reflection that defines both William and the WRKdefined network of shows.

Keywords
AI in podcasting, HR tech, authenticity, podcast search optimization, personalization, digital identity, privacy, digital twins, agentic internet, audience engagement, AI tools, discoverability, content creation, automation, human connection

Takeaways
Podcasting has evolved from a solo pursuit to a collaborative, AI-empowered craft.
Optimization now means being discoverable by AI, not just by search engines.
AI is already embedded throughout the creative workflow — from editing to marketing.
Personal authenticity builds lasting trust in an algorithmic world.
Digital twins and personalization raise questions about identity, privacy, and consent.
Good content isn’t manipulation — it’s value shared with intention and empathy.
True innovation comes from staying curious, playful, and human.

Quotes
“We’ve moved from search to conversation — people don’t Google anymore, they ask.”
“Independent podcasting can be lonely, but community turns it into a craft.”
“You can’t automate authenticity, but AI can help you amplify it.”
“If your content has value, you’re not gaming the system — you’re serving people.”
“Privacy is an illusion. So, make the ads you see worth your time.”
“Digital twins may not replace us, but they’ll definitely outlive us.”

Chapters
00:00 – Welcome and introduction
00:26 – William’s 25-year journey in HR tech and podcasting
02:47 – The evolution of Elevate Your AIQ and lessons from early episodes
05:25 – From SEO to PSO: Optimizing for AI discoverability
09:06 – Why AI-driven content isn’t manipulation when it adds real value
10:39 – Building community through the WRKdefined Podcast Network
13:44 – Experimentation, creativity, and learning from other hosts
16:23 – How AI is transforming podcast production workflows
19:17 – Forgetting, hallucinations, and the limits of AI memory
21:48 – Digital twins and the blurred lines between personal and professional identity
26:32 – Authenticity online: the “one-dimensional self”
31:39 – Privacy illusions and the myth of online anonymity
33:57 – The “agentic internet” and the power of individual terms
38:25 – Advertising, personalization, and the importance of relevance
41:58 – Lazy marketing, weak signals, and bad outreach
46:46 – Aggregating knowledge and curating content intelligently
51:01 – Content creation, subscriptions, and the value of giving before selling
53:43 – AI, equity, and unlocking untapped talent
57:34 – Closing reflections and the case for empathy in technology

William Tincup: https://www.linkedin.com/in/tincup
WRKdefined: https://wrkdefined.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 89: Navigating the AI Doom Loop to Improve Hiring Outcomes with Dan Chait
Bob Pulver talks with Dan Chait, CEO and co-founder of Greenhouse, about how technology, especially AI, is reshaping the hiring landscape — for better and worse. Dan shares Greenhouse’s origin story and the company’s mission to help every organization become great at hiring through structured, data-driven, and fair processes. Together, they explore the “AI doom loop” of automated applications and AI-written job descriptions, the tension between efficiency and authenticity, and how innovations like Real Talent and Dream Job aim to bring trust, fairness, and humanity back into hiring. The conversation also touches on identity verification, prompt injection risks, AI ethics, and the evolving skills that will define the workforce of the future.

Keywords
AI hiring, structured hiring, recruiting technology, Greenhouse, Real Talent, Dream Job, hiring fairness, candidate experience, identity verification, deepfakes, AI doom loop, prompt injection, job seeker experience, future of work, skills-based hiring, authenticity in hiring, mission-driven leadership, HR tech

Takeaways
AI can enhance hiring but must not replace human connection and judgment.
The “AI doom loop” is eroding trust between employers and candidates.
Real Talent helps companies identify legitimate, high-intent applicants.
Dream Job empowers real people to rise above automated applications.
Employers should be transparent about how AI is used in hiring decisions if they want to build trust while improving their employer brand.
The résumé’s role is fading as new ways of showcasing skills emerge.
The future of hiring belongs to organizations that unite data, empathy, and trust.

Quotes
“Our mission is to help every company be great at hiring — and that means putting structure and fairness at the center.”
“We’re caught in an AI doom loop where both sides are using automation to outsmart the other — and no one’s winning.”
“You can’t automate authenticity. The human element is what stands out most in a world full of AI slop.”
“We can do anything, but we can’t do everything. So we focus on what matters most: helping people connect in meaningful ways.”
“It’s not about banning AI — it’s about setting clear expectations for how to use it responsibly.”
“The death of the résumé has been predicted for decades, but maybe this is finally the time.”

Chapters
00:00 – Welcome and introduction
00:44 – Greenhouse origin story and mission
02:50 – Lessons from Dan’s early career and the importance of structured hiring
06:00 – Hiring for skills and potential over pedigree
08:20 – How structured interviews and scorecards create fairness and better data
11:00 – Balancing mission and business success at Greenhouse
13:40 – Introducing Real Talent and solving the “AI doom loop”
16:50 – Detecting fraud, misrepresentation, and risk in job applications
18:45 – Partnership with Clear for verified identities
20:00 – Digital credentialing and transparency in hiring
22:30 – The “AI vs. AI” challenge: automation on both sides of the hiring equation
25:00 – Dream Job: Human intent meets AI efficiency
27:50 – The candidate experience crisis and how to fix it
30:20 – Why resumes and job descriptions are losing meaning
32:00 – Bringing humanity back to hiring in an AI-dominated world
34:30 – The future of the HR tech ecosystem and partnerships
40:00 – Agentic AI and the next frontier of recruiting technology
43:00 – The death of the résumé and what replaces it
47:00 – Skills, AI literacy, and the next generation of workers
52:00 – Setting clear expectations for AI use in hiring
55:00 – Personal AI use: augmenting human connection
56:00 – Closing thoughts and reflections

Dan Chait: https://www.linkedin.com/in/dhchait
Greenhouse: https://greenhouse.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 88: Advancing the Human-AI Relationship to Redesign Work with Agi Garaba
Bob Pulver speaks with Agi Garaba, Chief People Officer at UiPath, about the organization’s evolution from robotic process automation (RPA) to agentic AI and how that has impacted people, processes, and culture. Agi shares how HR can lead with a human-centric lens during AI transformation, the importance of AI literacy, and the practical steps UiPath is taking to balance innovation with responsible governance. This conversation blends strategic foresight with pragmatic execution and offers a roadmap for any leader navigating AI-enabled change.

Keywords
UiPath, agentic AI, automation, digital workers, RPA, HR technology, AI governance, AI literacy, talent acquisition, responsible AI, workforce transformation, human-centric design, reskilling, change management, future of work, CHRO, culture shift, AI readiness

Takeaways
UiPath’s transition from RPA to agentic automation marks a broader shift in how digital and human workers collaborate.
HR has a central role in driving culture, trust, and adoption around emerging AI tools.
A grassroots approach to agent development—crowdsourcing over 500 ideas from employees—ensures relevance and engagement.
AI governance must evolve with technology; dedicated roles and frameworks are key to managing bias, access, and compliance.
Building AI literacy across the organization—through tiered training and internal tooling—helps democratize innovation.
Recruiting is transforming, but human relationships remain critical, especially in engaging passive candidates and senior-level talent.
Not every task should be automated—some skills, like creative writing or candidate engagement, lose value when over-automated.
Over-automation can create long-term talent gaps; junior roles are vital for succession and cultural continuity.

Quotes
“It’s not just a technology-led transformation. Culture has to be a core part of the AI journey.”
“Over 50% of my HR team are citizen developers—we’ve built that capability into our DNA.”
“We crowdsourced more than 500 ideas for agents across the organization—and everyone had a voice.”
“Just because you can automate something doesn’t mean you should. Human context still matters.”
“AI literacy is about imagination as much as it is about instruction. People need to see what’s possible.”
“I’d like to create a workplace where human connection still matters—even as agents take on more tasks.”

Chapters
00:00 – Introduction and Agi’s Career Path to UiPath
03:00 – From RPA to Agentic Automation
05:00 – HR at the Crossroads of Tech and Culture
07:15 – Org Design with Digital Coworkers
10:30 – Building Trust in Agentic Systems
13:40 – Responsible AI in HR Contexts
17:00 – Prioritizing and Tracking Agent Development
19:00 – Building AI Literacy Across the Organization
22:30 – From Vision to Execution: Pilots and Production
24:10 – Cross-functional Use Cases and Orchestration
26:45 – Governance, Compliance, and Continuous Oversight
30:00 – Redefining Human Skills in the Age of AI
33:00 – Knowing When Not to Automate
35:40 – Long-term Impacts on Junior Roles and Succession
38:45 – Strategic Workforce Planning and Digital Labor
41:00 – Agents in Recruiting: Limits and Opportunities
44:00 – Maintaining Human Relationships in Talent Acquisition
48:00 – Executive Search, Talent Advisors, and the Future of Recruiting
51:30 – Agi’s Personal Use and Reflections on GenAI
54:00 – Balancing Utility, Trust, and Critical Thinking
55:30 – Closing Thoughts and Wrap-up

Agi Garaba: https://www.linkedin.com/in/agnesgaraba
UiPath: https://uipath.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 87: Reimagining Learning Experiences in the AI Era with Lisa Yokana
In this compelling episode, Bob speaks with Lisa Yokana, a pioneering educator and global consultant, about how AI is reshaping the education landscape. Lisa shares her journey from traditional art and architecture teacher to building an experiential design lab, STEAM program, and social entrepreneurship course. Bob and Lisa explore how AI can serve as a catalyst for changing not just what we teach, but how we teach and why. With a focus on student agency, lifelong learning, and the shifting expectations of the future workforce, Lisa offers practical insights and inspiration for educators, parents, and community leaders looking to bring relevance, equity, and innovation into the classroom.

Keywords
AI in education, student agency, maker-centered learning, design thinking, STEAM, lifelong learning, workforce readiness, future of education, educational disruption, personalized learning, human skills, ethical AI, K-12 innovation

Takeaways
AI is a disruptor that can serve as a catalyst for rethinking teaching and learning.
Student agency—not content mastery—is the core skill for future-ready learners.
Traditional education systems are misaligned with the skills needed for the future workforce.
Hands-on, project-based learning nurtures creativity, empathy, and real-world problem solving.
Educators must experiment, fail forward, and reimagine their roles.
Community support is critical for educational transformation.
Ethics, responsible use, and digital literacy must be part of AI education, and must start early.
AI levels the playing field for diverse learners but must be designed and used thoughtfully.

Quotes
“I never ask for permission. I just ask for forgiveness—and sometimes not even that.”
“The big question is: what content is truly important for students to learn—and what can they master on their own?”
“Agency is the kernel. If students have it, they can be resilient, adaptive, and self-directed.”
“We want to create curious, empathetic humans who know they can change the world.”
“AI doesn’t live a life—it can’t replace the embodied experience of being human.”
“Schools need community conversations, not mandates, to adopt AI responsibly and equitably.”

Chapters
00:00 – Lisa Yokana’s background and the early signs of educational misalignment
02:35 – Leaving the classroom to consult globally on innovation and mindset
03:25 – Reframing education: Skills vs. content
06:20 – Nurturing student agency and tackling big problems
09:01 – The disconnect between education and workforce needs
12:56 – How Lisa gained support and built the Scarsdale Design Lab
17:29 – Parent engagement and community buy-in
20:59 – Integrating AI in meaningful, ethical ways
24:06 – Educator mindsets and reframing pedagogy around AI
27:26 – AI use starts younger than we think
29:24 – Rethinking college in the age of AI
35:33 – Global patterns in AI adoption across education systems
39:20 – Addressing neurodiverse needs and accessibility
42:24 – Broadening community engagement and “thinking out loud”
43:38 – Responsible AI use and responsible design
49:11 – Big Tech’s role and thoughtful AI adoption in schools
53:03 – Final advice for parents, educators, and students

Lisa Yokana: https://www.linkedin.com/in/lisa-yokana-81787ba
Next World Learning Lab: https://nextworldlearninglab.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 86: Architecting the Future of Workforce Intelligence with Ben Zweig
Bob Pulver welcomes Ben Zweig, CEO of Revelio Labs and labor economist, for a deep dive into the evolving world of workforce analytics. Drawing from their overlapping experiences at IBM, Bob and Ben explore how the early days of cognitive computing sparked a journey toward greater transparency in labor market data. Ben explains how Revelio Labs is building a “Bloomberg Terminal” for workforce insights—grounded in publicly available data and powered by sophisticated taxonomies of occupations, tasks, and skills. Together, they examine the importance of job architecture, the promise and pitfalls of AI in workforce analytics, and the complexities of measuring contingent and freelance labor. Ben also shares a preview of his upcoming book, Job Architecture, and how LLMs are being used to redefine how organizations model and respond to changes in work itself.

Keywords
Revelio Labs, Ben Zweig, labor market data, job architecture, workforce analytics, strategic workforce planning, AI in HR, cognitive computing, IBM, labor economics, generative AI, skills-based hiring, public labor statistics, contingent workforce, gig economy, talent intelligence

Takeaways
Revelio Labs aims to recreate company-level workforce insights using publicly available employment data, similar to how Bloomberg transformed financial markets.
Job architecture is built on three distinct but interrelated taxonomies: occupations, tasks, and skills.
Many orgs think of skills as the building blocks of jobs, rather than attributes of people—a conceptual misstep that limits strategic planning.
Gen AI is being used to score the automation vulnerability of tasks, enabling better insights into how work is changing.
Strategic workforce planning is often misnamed—what most companies do is operational, not truly strategic.
Contingent and freelance labor remains a blind spot in many traditional labor statistics and HR systems.
The ability to adjust for data bias, reporting lags, and incomplete workforce signals is critical for creating trustworthy insights.
Revelio’s Public Labor Statistics offers an independent source of macro labor data, complementing BLS and ADP methodologies.

Quotes
“Skills are attributes of people. Tasks are the building blocks of jobs.”
“What’s exciting is that these are hard problems with big upside—unlike finance, where most of the low-hanging fruit is gone.”
“We’re asking LLMs to tell us what they’re good at—and how confident they are in that judgment.”
“Most organizations don’t need to pay $1M to build a taxonomy anymore. They just need the right approach and the right data.”
“There’s no reason we shouldn’t be repurposing labor market insights to help individuals, not just institutions.”

Chapters
00:00 — Intro and HR Tech reflections
02:08 — Ben’s background in economics and IBM analytics
06:43 — Why labor market data lags behind capital markets
09:22 — Building a flexible, bias-adjusted analytics stack
14:19 — Empathy for job seekers and candidate friction
16:10 — Why job discovery is fundamentally an information problem
19:53 — Unpacking job architecture: occupations, tasks, and skills
24:28 — Scoring AI’s impact on tasks, not skills
28:39 — Summarization vs. hallucination in generative AI
38:45 — Introducing RPLS: Revelio Public Labor Statistics
45:40 — The challenge of tracking freelance and contingent work
51:58 — Dealing with ghost data and workforce ambiguity
53:35 — Real-life uses of AI and Ben’s curiosity mindset
54:42 — Closing thoughts

Ben Zweig: https://www.linkedin.com/in/ben-zweig
Revelio Labs: https://reveliolabs.com
Job Architecture (pre-order): https://www.amazon.com/Job-Architecture-Building-Workforce-Intelligence/dp/1394369069/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 85: Navigating AI Hiring Risks to Mitigate Adverse Impact with Emily Scace
Bob Pulver speaks with Emily Scace, Senior Legal Editor at Brightmine, about the intersection of AI, employment discrimination, and the evolving legal landscape. Emily shares insights on how federal, state, and global regulations are addressing bias in AI-driven hiring processes, the responsibilities employers and vendors face, and high-profile lawsuits shaping the conversation. They also discuss candidate experience, transparency, and the role of AI in pay equity and workforce fairness.

Keywords
AI hiring, employment discrimination, bias audits, compliance, workplace fairness, age discrimination, Title VII, DEI backlash, Workday lawsuit, SiriusXM lawsuit, EU AI Act, risk mitigation, HR technology, candidate experience

Takeaways
Employment discrimination laws apply at every stage of the talent lifecycle, from recruiting to termination.
States like New York, Colorado, and California are setting the pace with new AI-focused compliance requirements.
Employers face challenges managing a patchwork of state, federal, and international AI regulations.
Recent lawsuits (Workday, SiriusXM) highlight risks of bias and disparate impact in AI-powered hiring.
Candidate experience remains a critical yet often overlooked factor in mitigating both reputational and legal risk.
Employers must balance the promise of AI with the responsibility to ensure fairness, accessibility, and transparency.
Pay equity and transparency represent promising use cases where AI can drive positive change.
Quotes
“Discrimination can happen at any stage of the employment process.”
“Some state laws go as far as requiring employers to proactively audit their AI tools for bias.”
“Employers can’t just outsource their hiring funnel and blindly take the recommendations of AI.”
“Class actions often succeed where individual discrimination claims struggle — they reveal systemic patterns.”
“Even if candidates don’t get the job, a little touch of humanity goes a long way in making them feel respected.”
“AI has real potential to help employers get to the root causes of pay inequity and model solutions.”

Chapters
00:00 – Welcome and Introduction
00:36 – Emily’s background and role at Brightmine
02:38 – Overview of employment discrimination laws
05:27 – AI and compliance with existing legal frameworks
07:20 – California’s October regulations and employer liability
09:54 – Employer challenges with multi-state and global compliance
11:26 – Proactive vs reactive approaches to AI bias
13:06 – EU AI Act and global alignment strategies
15:37 – High-risk AI use cases in employment decisions
18:34 – DEI backlash and its impact on discrimination law
20:59 – Age discrimination and the Workday lawsuit
27:34 – Data, inference, and bias in AI hiring tools
31:25 – Candidate experience and black-box hiring systems
33:33 – Bias in interviews and the human role in hiring
37:43 – Transparency and feedback for candidates
42:44 – AI sourcing tools and recruiter responsibility
47:52 – Risks of misusing public AI tools in hiring
50:12 – The SiriusXM lawsuit and early legal developments
54:08 – Candidate engagement and communication gaps
59:19 – Emily’s views on AI tools and positive use cases

Emily Scace: https://www.linkedin.com/in/emily-scace
Brightmine: https://brightmine.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Ep 84: Orchestrating Responsible AI Transformation at Scale with Brandon Roberts
Bob speaks with Brandon Roberts, VP of Global People Product, Analytics, and AI at ServiceNow. Brandon shares how ServiceNow is navigating AI transformation from within its HR organization, balancing internal experimentation with client-informed innovation. They dive deep into responsible AI practices, strategic reskilling, and cross-functional collaboration, while unpacking key frameworks. Brandon also offers a preview of forthcoming research on the future impact of agentic AI on the workforce and shares actionable insights for HR and business leaders on how to lead with confidence, empathy, and clarity in a rapidly evolving landscape.

Keywords
Responsible AI, Agentic AI, HR transformation, AI Playbook, AI readiness, AI literacy, reskilling, upskilling, internal mobility, ServiceNow, people analytics, AI enablement, human-centric, HR-IT collaboration, future of work, AI governance, workforce planning

Takeaways
ServiceNow’s HR team is leading internal AI adoption while helping shape product development through real-world use and feedback.
The AI Playbook for HR Leaders provides a practical framework that blends vision with tactical execution.
Responsible AI isn’t just a compliance exercise—it's a continuous process requiring monitoring, iteration, and cross-functional governance.
ServiceNow’s AI Control Tower centralizes use case tracking, governance status, adoption metrics, and value realization.
The AI Heat Map approach helps identify which tasks are most ripe for AI augmentation and where reskilling efforts should focus.
Strategic reskilling efforts, like transitioning HR operations roles into people partner roles, show how AI can enable—not replace—human potential.
HR-IT collaboration is essential to enabling governance, product experimentation, and sustained transformation.
Upcoming research from ServiceNow estimates 8 million U.S. roles will be transformed by agentic AI in the next five years.
Quotes
“This is a human transformation, not just a tech transformation.”
“Responsible AI isn’t finished at launch—it needs to be continuously monitored.”
“We call it the AI Heat Map—breaking down roles into tasks to see where AI can really help.”
“Strategic workforce planning needs to evolve into strategic work planning.”
“If AI doubles productivity, it should also unlock opportunities—not eliminate people.”
“We want employees to feel safe using AI and know we’re committed to reskilling, not replacing them.”

Chapters
00:00 – Intro and Brandon’s background
02:00 – Brandon’s unique role in HR and product feedback loops
03:20 – Internal vs. customer-led innovation
04:24 – AI solution inventory and governance
07:18 – AI readiness, literacy, and cultural change
10:00 – Role-based skill development
12:00 – Embedding Responsible AI across the enterprise
14:36 – Balancing innovation with ethical oversight
17:50 – HR and IT collaboration at ServiceNow
20:45 – Agentic AI and workforce planning
23:47 – Case study: reskilling HR ops into people partners
29:03 – Why internal talent is often overlooked
33:21 – The evolving value of analytics in the AI era
36:58 – Importance of data quality and governance
40:32 – How AI will transform every role and industry
46:03 – Banking and reinvesting AI-driven time savings
48:27 – How ServiceNow filters and prioritizes AI ideas
49:18 – Teaser: upcoming research on agentic AI’s impact
51:06 – Personal AI tools and what’s exciting (or scary)
54:04 – Final thoughts and call to action

Brandon Roberts: https://www.linkedin.com/in/brandon-roberts-50796ba
AI Playbook for HR Leaders: https://www.servicenow.com/content/dam/servicenow-assets/public/en-us/doc-type/resource-center/ebook/eb-hr-role-in-ai-transformation.pdf
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 83: Recalibrating Workforce Decisions via People Analytics and Gen AI with Cole Napper
Bob sits down with Cole Napper, VP of Research, Innovation & Talent Insights at Lightcast, to unpack the complex and rapidly evolving world of people analytics. From his eclectic career across industries to his recent book release and his co-hosting role on the very popular people analytics podcast, Directionally Correct, Cole shares practical insights and hard-earned wisdom on topics like AI readiness, org network analysis, and the intersection of data, influence, and leadership. Bob and Cole explore the paradoxes of the HR tech ecosystem, the stubborn persistence of unsolved problems, and why storytelling with data is really about persuasion. Cole also gets candid about the ethical responsibilities facing those who wield data, and why the future of workforce planning demands a complete rethink of how we study work itself.

Keywords
people analytics, talent intelligence, workforce planning, organizational network analysis, Lightcast, HR tech, Gen AI, quality of hire, job analysis, data storytelling, ethical AI, talent metrics, innovation, influence and persuasion, data infrastructure, Directionally Correct podcast

Takeaways
People analytics is only valuable when it influences decisions.
Evolution of HR tech is moving from digitization to “value-first” intelligence.
Effective storytelling with data is about persuasion and influence, not charts.
Despite its maturity, organizational network analysis (ONA) remains underutilized.
Most companies are underinvesting in data infrastructure, even as they chase AI initiatives.
A flexible framework for measuring quality of hire is more useful than a rigid definition.
Job analysis is having a renaissance as AI demands a deeper understanding of work.
Ethics in people analytics isn't just about governance — it's about virtue and trust.
Quotes
“People analytics that doesn't influence decision-making is just overhead.”
“We’re still digitizing HR — we haven’t even started to optimize it.”
“Smart people assume their conclusions are self-evident, but that’s not how decisions are made.”
“We need storytelling with data, but what we really need is persuasion with data.”
“AI’s biggest challenge in HR isn’t capability — it’s data infrastructure and context.”
“There’s no one watching the watchmen — ethics starts with the person in the seat.”
“The study of work isn’t sexy, but it’s suddenly essential again.”

Chapters
00:02 - Welcome and Intro to Cole Napper
00:55 - Cole’s Career Journey
03:29 - Patterns Across Industries and the Illusion of Uniqueness
06:51 - Community, Knowledge Sharing, and Power of Consortiums
08:57 - Why Smart People Still Struggle to Influence with Data
11:33 - From HR Tech to People Analytics: Digitization vs. Value Creation
13:51 - Data vs. Self-Interest: Why Decisions Get Blocked
15:49 - Untapped Potential of Org Network Analysis
18:54 - Use Cases: Building Teams, Referrals, and AI-Enhanced Sourcing
25:17 - Cole’s Book: Why Now, and What It’s About
28:13 - Shifting from Cost Center to Profit Center in People Analytics
32:22 - People Analytics Leading AI Adoption in HR
35:31 - Probabilistic Thinking, Determinism, and Predictive Pitfalls
36:55 - Measuring Quality of Hire: Frameworks vs. Definitions
40:41 - AI Assistants, Prescriptive Insights, and Reinforcement Learning
44:26 - Data Infrastructure as the Real AI Unlock
48:25 - Strategic Work Planning in an AI-Enabled World
52:25 - Who Will Watch the Watchmen? Ethics and Virtue in Analytics
55:28 - Predictions vs. Deductions and Parting Thoughts

Cole Napper: https://www.linkedin.com/in/colenapper
Directionally Correct: https://wrkdefined.com/podcast/directionally-correct
"People Analytics": https://www.colenapper.com/book
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 82: Riding the Waves of Tech Innovation and Human-Centric Recruiting with Steve Levy
In this wide-ranging and thought-provoking conversation, Bob Pulver sits down with Steve Levy — recruiting veteran, technologist, and self-proclaimed “truth-teller” — to explore how talent, technology, and transformation intersect in today’s world of work. From the early days of expert systems and green-screen mainframes to the complexities of generative AI, Steve brings a rare blend of historical context, critical thinking, and humor. Together, they tackle topics like the ethics of candidate AI, bias in hiring platforms, skills-based hiring, the need for AI literacy, and why every recruiter needs to be more curious — and more human. Steve also shares lessons from his decades as a lifeguard at Jones Beach, and how that role shaped his instincts for protecting and empowering people — a theme that carries through everything he does in talent acquisition.

Keywords
AI in recruiting, expert systems, generative AI, candidate experience, skills-based hiring, talent ethics, AI literacy, job applications, bias in hiring, strategic workforce planning, Jones Beach lifeguard, recruiting tech, AI governance, human-centered design, talent intelligence, responsible AI

Takeaways
AI isn't new — it's just louder now: Steve recalls early experiences with AI-like systems in the 1980s and draws parallels to today’s hype and fear cycles.
Recruiters need more curiosity, less fear: Avoiding AI won’t make it go away — recruiters must engage, experiment, and understand where AI fits.
The real problem? Poor inputs: Most job descriptions and resumes are terrible — AI can’t solve for that without better human collaboration.
Bias goes both ways: If employers can use AI to screen resumes, candidates can use it to write them — the key is transparency and integrity.
Quality of hire starts with better intake: Steve emphasizes the importance of understanding real business problems, not just scanning for keywords.
Candidate AI vs. Employer AI: The current debate needs to move past gut reactions and toward practical, equitable frameworks.
We need new roles and metrics: From TA ethicists to agentic governance leads, the future workforce demands new capabilities.
Recruiting is about inclusion, not gatekeeping: Steve’s philosophy centers on humanizing the process and finding reasons to say “yes.”

Quotes
“If you can't audit it, don't automate it.”
“The real challenge is working to include someone rather than exclude them.”
“We're seeing artificial stupidity — not artificial intelligence.”
“Being afraid of the ocean because of sharks is like avoiding AI because of hallucinations. You’ve got to get in the water.”
“You can fight this, or you can plan for it. That’s it.”
“Most people don't write good resumes. Most recruiters don't write good job descriptions. AI's not going to save us from that.”

Chapters
00:00 – Opening & Reconnecting with Steve Levy
03:01 – Recruiting Before Computers & the Rise of Expert Systems
08:12 – What AI Is (and Isn’t): Fear, Hype & Progress
13:17 – Strategic TA in an Agentic Era
21:07 – AI Literacy, Education & Workforce Readiness
28:11 – Candidates Using AI vs. Employers Using AI
36:45 – Problems with Job Descriptions, Resumes & Gatekeeping
45:24 – Ethics, Transparency & Legal Implications in Hiring AI
54:10 – Talent Intelligence & Strategic Workforce Planning
1:05:33 – The SiriusXM Lawsuit & Candidate Frustration
1:15:57 – Lifeguard Lessons for the AI Age
1:20:12 – Final Thoughts on What Comes Next

Steve Levy: https://www.linkedin.com/in/levyrecruits
Steve’s Blog: https://recruitinginferno.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 81: Navigating a World of Signals, Systems, and Decision Intelligence with Marshall Kirkpatrick
In this lively and thought-provoking episode of Elevate Your AIQ, Bob Pulver reconnects with former collaborator and pioneering technologist Marshall Kirkpatrick. From their early work intersecting social data and influence to Marshall's latest AI-driven workflows, the conversation explores how human insight and machine intelligence are converging. Marshall shares real-world examples of using synthetic personas, market monitoring systems, and creative prompting strategies to uncover early signals, amplify strategic decisions, and reimagine everything from talent acquisition to environmental policy tracking. It's a conversation that spans everything from the emergence of machine learning for social insights to the frontier of AI innovation.

Keywords
AI-powered market monitoring, synthetic personas, talent acquisition, influencer marketing, social analytics, Claude, Perplexity, scenario planning, digital twins, quality of hire, Obsidian, strategic planning, generative AI, Delphi method, social capital

Takeaways
Marshall’s Journey: Marshall has spent his career identifying experts and building tools to surface valuable insights from social data.
Synthetic Personas in Action: Using tools like Claude to create synthetic expert panels that evaluate documents, surface perspectives, and even challenge his own thinking.
AI-Augmented Talent Scenarios: Using AI to simulate team compositions, evaluate candidates’ social behaviors, and even model potential collaboration outcomes.
Monitoring the Market with AI: Building systems that detect early signals in markets — including environmental policy — using a mix of RSS, generative AI, and good old-fashioned curiosity.
Digital Twins and Ownership: Exploring who owns the knowledge embedded in a “digital twin” of an employee — and how organizations might leverage them responsibly.
Strategic Planning Reimagined: Using AI to model outcomes based on actions and strategies offers new ways to engage in scenario planning — not just in workforce contexts, but in grantmaking and innovation networks.
Counterargument Workflows: Marshall shares his custom-built browser tool that generates counterarguments to online content using ChatGPT, promoting critical thinking and cognitive diversity.

Quotes
“I try to eat my own dog food — or drink my own champagne — when it comes to market monitoring.”
“There’s gold in that data. We just have to figure out how to mine it responsibly and effectively.”
“Synthetic personas are fast, cheap, and good enough to get the conversation started.”
“What’s the strategy, what’s the output — and what’s the outcome? That’s where AI can help us model the messy middle.”
“You can’t just look at someone’s codebase or resume — you need context, behavior, and communication patterns.”
“I built a ‘counterargument bookmarklet’ to challenge the assumptions in what I’m reading online.”

Chapters
00:00 – Welcome & Reconnection: Marshall’s Background and Journey
03:12 – AI Systems for Market Monitoring and Early Signal Detection
10:58 – The Evolution of Social Analytics and Social Capital
16:39 – Talent Acquisition, AI, and the Value of Social Footprints
24:57 – Scenario Planning with Synthetic Personas
32:05 – Driving Innovation through Grant Monitoring and Project Pairing
40:41 – From Digital Twins to Ethical Implications of AI in the Workforce
50:15 – Counterargument Workflows and Critical Thinking with AI
58:21 – Closing Thoughts: Responsible AI, Community, and the Road Ahead

Marshall Kirkpatrick: https://www.linkedin.com/in/marshallkirkpatrick
Earth Catalyst: https://www.earthcatalyst.co/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 80: Challenging AI Hype and Building Trusted Solutions with Colette Mason
Bob sits down with Colette Mason, a tech veteran with 40 years of experience in computing and a deep understanding of human behavior through her work in coaching and neuro-linguistic programming. Together, they explore the hype and reality around AI adoption, automation myths, and why “responsible by design” is more than just a catchphrase. Colette shares her perspectives on human-centric design, AI literacy, and how to keep authenticity intact in an AI-powered world. With warmth, humor, and real-world wisdom, this conversation brings clarity to an often-confusing landscape—and reminds us that technology should augment rather than replace what only humans can and should do.

Keywords
AI literacy, human-centric design, responsible AI, automation, digital assistants, content generation, neuro-linguistic programming, human-AI collaboration, ethical AI, digital tools, Colette Mason, trusted AI

Takeaways
AI ≠ Automation: Many tasks called "AI" are really just workflow automation. It's important to distinguish between the two.
Human-Centered Design Matters: AI tools should reflect human needs, limitations, and behaviors, especially when used in sensitive areas like hiring.
The Hype Is Real—and Misleading: Over-promising on AI capabilities can hurt trust and morale. Colette urges a more grounded, realistic view.
Use AI Where It Helps, Not Where It Hurts: Delegate the boring stuff, but don’t let AI speak in your voice without oversight.
Authenticity Still Wins: Whether it's writing, speaking, or building a personal brand, being transparent about AI involvement builds trust.
Responsible Use Is Everyone’s Job: From solo entrepreneurs to large enterprises, we all have a role in building and using trustworthy AI.
Design for Real People: Most users aren’t tech-savvy. Tools need to be intuitive, safe, and aware of different user needs—including neurodiversity.
Top Quotes
“I model people’s brains because I’m a hypnotherapist—and that’s actually a superpower in tech.”
“There’s a lot of AI that isn’t really AI. It’s just automation with lipstick.”
“The system has to read the room—it can’t just say ‘you didn’t give me all the info, mate.’”
“Regular people need AI that helps them make it to their kids’ school play—not impress YouTube bros.”
“Don’t replace yourself with AI. Do less, but make it more you.”
“We’re not in the early innings—we’re still in warmups when it comes to AI literacy.”

Chapters
00:00 – Intro and Colette's Background
02:00 – AI Hype vs. Reality: What’s Really Happening
06:00 – Automation ≠ AI: Breaking the Misconceptions
10:30 – Building Human-Centered Tools and Workflows
17:00 – Responsible AI and “Designing for Safety”
24:00 – Fairness in Hiring and Interviewing with AI
30:00 – The Quality of AI-Generated Content
38:00 – Being Transparent About AI Use
44:00 – Ethics, Reputation, and the Court of Public Opinion
50:00 – Global Perspectives on AI Regulation
54:30 – Favorite Tools and Real-World Applications
01:00:00 – The Future of Personality in AI Models
01:03:30 – Closing Thoughts

Colette Mason: https://www.linkedin.com/in/colettemason
Clever Clogs AI: https://www.cleverclogsai.com/
Ditch Rework, Build Teamwork: https://www.amazon.com/Ditch-Rework-Build-Teamwork-Principles-ebook/dp/B0FBL4C6ZP
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 79: Leveraging AI to Transform Knowledge into Enterprise Intelligence with Dan Stradtman
Bob sits down with Dan Stradtman, Chief Marketing Officer at Bloomfire, to explore the evolving landscape of knowledge management (KM) in the age of AI. Dan brings a wealth of experience from Fortune 500 giants like Walmart, GE, and Lubrizol (Berkshire Hathaway). They discuss how tacit and institutional knowledge are often undervalued and underutilized. Bob and Dan unpack Bloomfire’s concept of “Enterprise Intelligence” and its new framework for treating knowledge as a measurable, strategic asset. They also cover the risks of overlooking tacit knowledge, how AI adoption is changing who leads knowledge initiatives, and the crucial role of ethics, trust, culture, and human-centricity in the enterprise AI journey.

Keywords
Enterprise Intelligence, Knowledge Management, Tacit Knowledge, Bloomfire, Enterprise AI, Digital Assistants, Leadership, Strategic Workforce Planning, Culture, Cognitive Diversity, Collective Intelligence, Human-Centricity, Trust, Future of Work, Ethical AI

Key Takeaways
Knowledge is an asset: Companies often fail to treat knowledge—especially tacit knowledge—as a formal asset on the balance sheet.
AI elevates knowledge management: The rise of AI has pushed KM into the C-suite, with a growing emphasis on enterprise-wide integration.
Tacit knowledge loss is costly: Orgs lose significant institutional knowledge without realizing its overall impact.
Trust drives knowledge sharing: Cultural factors, psychological safety, and leadership behavior directly impact how willing employees are to share knowledge.
Remote work challenges knowledge flow: For early-career professionals, the hybrid environment can inhibit mentorship and exposure to institutional wisdom.
Digital advisors & AI agents are rising: As digital personas and assistants become more advanced, organizations must consider the ethical implications.
SWP evolution: Strategic workforce planning should evolve into strategic work planning, balancing both digital and human contributions.
Measuring value requires new KPIs: Bloomfire’s framework ties knowledge value to tangible outcomes like revenue per employee, onboarding speed, and OKR attainment. Cognitive diversity is crucial: Varied perspectives and experiences within teams lead to better problem-solving and innovation. AI is integral to the future of work: It will require a blend of human and AI capabilities and should remain human-centric. “Tacit knowledge is going out the door, and companies are underestimating how consequential that is.” “AI systems are only as good as the quality of the knowledge you feed them. It’s still garbage in, garbage out.” “Organizations need to think of themselves as ecosystems, where people and digital agents work together.” “Cognitive diversity is going to be critical—otherwise everyone’s just prompting the same chatbot.” Chapters 00:00 – Welcome and Guest Introduction 02:00 – Dan’s Career Journey and Road to Bloomfire 05:00 – What Bloomfire Does and the Rise of Enterprise Intelligence 08:30 – The Evolution of KM 12:00 – AI’s Role in Driving KM to the C-Suite 15:00 – Tacit Knowledge: The Hidden Asset 18:30 – The Value of Human-Centric Design in AI Strategy 24:00 – Skills Atrophy and the Impact of Remote Work 27:30 – Cognitive Diversity in the Age of AI 30:00 – Capturing Institutional Knowledge Through Tech 35:00 – Lessons from Early Expertise Discovery Tools 38:00 – Digital Advisors and the Risk of Redundancy 44:00 – Meeting Intelligence and Ethical Knowledge Capture 47:00 – Trust, Culture, and the Role of Leadership 55:00 – Experimentation, Risk, and AI Governance 59:00 – Innovation, Strategy, and the Future of Work Dan Stradtman: https://www.linkedin.com/in/danstradtman Bloomfire: https://bloomfire.com/ For advisory work and marketing inquiries: Bob Pulver: https://linkedin.com/in/bobpulver Elevate Your AIQ: https://elevateyouraiq.com Substack: https://elevateyouraiq.substack.com What’s Your AIQ? Assessment interest form

Ep 78: Identifying Untapped Tech Talent and Innovating Responsibly with Casey Fox
Bob Pulver and Casey Fox discuss the evolution of Tekletics, a company focused on bridging the gap between untapped talent and technology careers. Casey shares his journey from a business major to the CTO of Tekletics, emphasizing the importance of work ethic, innate human skills, and the role of AI in talent acquisition and development. They explore the challenges and opportunities presented by AI in the workforce, the need for a culture of responsibility, and the importance of human potential in the age of automation and AI.

Keywords: Tekletics, AI, workforce development, talent acquisition, future of work, technology, coding bootcamp, human potential, automation, career transition

Takeaways
- Tekletics aims to bridge the gap between untapped talent and technology careers.
- The evolution of Tekletics reflects the changing landscape of work and technology.
- Work ethic and a strong interest in technology are crucial for success in tech roles.
- AI is transforming talent acquisition and development processes.
- Organizations need to foster a culture of AI responsibility and ethical use.
- The future of work will involve collaboration between humans and AI.
- There are untapped talent pools that organizations can explore for hiring.
- Training programs should focus on real projects rather than traditional boot camps.
- AI tools can enhance productivity but must be used with caution.
- Building a diverse and skilled workforce is essential for the future.

Sound bites
- “We need to tap into untapped human potential.”
- “We want to build a culture of AI responsibility.”
- “We have to help build the next generation of SMEs.”

Chapters
00:00 Introduction to Tekletics and Casey Fox's Journey 03:23 The Evolution of Tekletics and Its Mission 08:46 Understanding the Future of Work and Career Pivots 16:02 Identifying Talent and Building Skills for the Future 20:00 Adapting to Changing Client Demands and AI Integration 25:09 Navigating the Talent Ecosystem and Future Opportunities 33:27 Navigating the Dystopian Path of AI 34:20 Fostering Curiosity in the Age of AI 36:22 The Evolution of Learning: Libraries vs. AI 38:32 Empowering Employees with AI: Trust vs. Control 40:33 The Human Element in AI Adoption 42:01 Building Trust in AI: Data Privacy Concerns 44:41 The Role of AI in Coding: A Double-Edged Sword 47:50 The Future of Junior Roles in a Tech-Driven World 51:17 Building a Foundation for Future Generations 55:28 AI Literacy: Understanding Risks and Opportunities 59:25 The Future of Work: Humans and AI Collaboration 01:02:55 Tekletics: Bridging the Gap for Future Talent

Casey Fox: https://www.linkedin.com/in/foxcase
Tekletics: https://www.tekletics.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 77: Distilling Interview Data into Hiring Intelligence with Siadhal Magos
Bob Pulver sits down with Siadhal Magos, CEO and Co-founder of Metaview, to explore how AI can unlock a more structured, scalable, and insight-rich approach to hiring. Siadhal brings deep experience from the world of product and people to unpack why interviews—despite being central to business success—remain one of the most inconsistent and intuition-driven processes in organizations. The conversation spans the origins of Metaview, the real cost of poor hiring decisions, and the gap between what hiring teams think they’re evaluating versus what they’re actually reacting to. They also discuss the difference between feedback and insight, the value of AI as an interview companion rather than a replacement, and why structured processes don’t have to come at the expense of candidate experience.

Keywords: Siadhal Magos, Metaview, interview intelligence, hiring decisions, quality of hire, feedback loops, AI in recruiting, structured interviews, candidate experience, decision-making, hiring bias, interview analytics, talent strategy, hiring intelligence, decision intelligence, summarization

Key Takeaways
- Hiring is high-stakes—but under-instrumented. Most teams still rely on memory, gut feel, and incomplete notes.
- AI can elevate—not replace—human judgment. Metaview focuses on supporting better decisions, not automating them away.
- Interview feedback ≠ insight. Capturing what was said and how it was evaluated creates a far more useful learning loop.
- Consistency doesn’t mean rigidity. Structured interviews can still be candidate-friendly and personalized.
- Good hiring mirrors good product thinking. Siadhal shares how tight feedback loops, data, and clarity fuel both.
- Curiosity is a superpower in early-stage building. The Metaview journey is a case study in iterating with empathy.

Top Quotes
- “The way most interviews are run is far too fragile for the importance of the decisions being made.”
- “We’re not trying to replace human judgment—we’re trying to give it better inputs.”
- “Hiring is one of the most strategic things a company does, but it’s often the least measured.”
- “It’s not just about the candidate’s answers—it’s about how the interviewer responded to them.”
- “Great teams are built through consistent, reflective decision-making—not just instincts.”

Chapters
00:00 – Opening and Siadhal’s early career in product and people 06:45 – Why interviews are broken and how Metaview began 13:10 – Feedback vs. insight: a new lens on interview data 20:00 – The ethics and implications of recording interviews 26:35 – Human judgment + AI: striking the right balance 33:20 – Structured interviewing and candidate experience 40:50 – Building with curiosity: lessons from Metaview’s journey 47:00 – Final thoughts on quality of hire, trust, and team growth

Siadhal Magos: https://linkedin.com/in/siadhal
Metaview: https://metaview.ai
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 76: Keys to Transforming Organizations for Human and AI Collaboration with Kristi Broom
Bob Pulver talks with Kristi Broom, Co-founder of Rising Tide Cooperative and a seasoned transformation leader. Kristi's career has spanned EdTech, L&D, operations, and HR. She shares her journey from building early learning platforms to leading organizational change at scale—all while staying grounded in her passion for helping people grow. The conversation explores what it means to be a generalist in an era of specialization, how to design systems that support behavior change, and the role of curiosity, structure, and storytelling in navigating innovation. Whether you’re working in HR, technology, transformation, or operations, this episode offers a fresh perspective on what it really takes to lead through complexity and build a future-ready organization.

Keywords: Kristi Broom, transformation, L&D, EdTech, generalist career, operational leadership, organizational change, innovation, behavior change, people development, systems thinking, storytelling in business, AI readiness, future of work

Key Takeaways
- Generalists are wired for transformation. Kristi explains how her generalist background allowed her to connect across silos and take on high-stakes change.
- EdTech roots shaped a systems mindset. Building early online learning systems taught her how to think structurally while staying flexible.
- Curiosity drives innovation. Kristi shares how being a “possibility thinker” has helped her evolve with each new challenge.
- Real transformation requires structure and story. Without storytelling, even well-designed systems fail to resonate.
- Growth is personal. Her work has always centered on helping people grow—whether through development programs, leadership models, or building intentional cultures.
- Pacing matters. When leading transformation, knowing when to accelerate—and when to pause—is a crucial leadership skill.

Top Quotes
- “I’ve always been a generalist—wired to think across disciplines, across people, across possibilities.”
- “Structure is important, but story is the engine that helps people move.”
- “I’m obsessed with seeing people grow. That’s where the energy comes from for me.”
- “You can't automate your way out of transformation—you still need leadership.”
- “My curiosity is a superpower, and I’ve learned how to use it to help people and organizations evolve.”

Chapters
00:00 – Opening and Kristi’s origin story in education and technology 08:15 – Becoming a generalist: curiosity, complexity, and change 14:30 – Early EdTech and learning platform design 20:45 – Making growth personal: L&D as a human imperative 27:10 – Leading transformation: structure, pacing, and storytelling 36:00 – Building future-ready systems without losing your people 42:55 – Final reflections on innovation, possibility, and what’s next

Kristi Broom: https://www.linkedin.com/in/kristibroom
Rising Tide Cooperative: https://risingtidecooperative.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 75: Reimagining Talent Pipelines from Data to Decisions with Andrew Gadomski
Bob Pulver welcomes Andrew Gadomski, Founder and Managing Director of Aspen Analytics, to talk about how AI and data science are reshaping the future of HR, talent, and organizational decision-making. Andrew is a veteran workforce data strategist who shares candid, practical insights on what it really takes for companies to evolve their data maturity, why LLMs can’t be treated like magic wands or oracles, and how to make AI work with your people, not instead of them. From “decision gravity” to the fallacy of talent pipeline management, this episode is a masterclass in balancing technological possibility with human nuance.

Keywords: Andrew Gadomski, Aspen Analytics, workforce analytics, decision intelligence, data maturity, talent strategy, HR transformation, responsible AI, talent pipeline, future of work

Key Takeaways
- The difference between using AI as a prediction tool vs. a decision-making tool—and why that matters
- “Decision gravity” and how influence travels through an organization
- Why most organizations aren’t “data mature” and how to assess where you really are
- LLMs (like ChatGPT) aren’t ready to make decisions—they need guardrails, oversight, and smart humans
- The myth of a linear talent pipeline and how hiring should actually work
- Data-informed != data-driven: what smart decision-making really looks like
- How to frame AI adoption around people, not just tools

Sound Bites
- “Data is a tool for influence—not control.”
- “If you don't trust the decision, you won't trust the data.”
- “AI will tell you what it would do. It won't tell you what you should do.”

Chapters
00:00 – Welcome and Guest Intro: Overview of Andrew’s role at Aspen Analytics and his approach to data-driven transformation.
05:10 – What “Data Maturity” Really Means: Why most organizations overestimate their data capabilities—and what a mature approach actually involves.
12:40 – Decision Gravity and Influence Mapping: How organizational decisions really get made and why influence—not hierarchy—is what drives outcomes.
21:25 – Prediction vs. Decision: The Role of AI: Understanding how AI fits into human workflows, and why relying on LLMs for decisions is risky.
31:00 – The Limits of Large Language Models (LLMs): Where LLMs can be helpful, where they hallucinate, and how to set trust boundaries around their output.
40:30 – Hiring Myths and the Talent Pipeline Fallacy: Why treating hiring like a “pipeline” misses the mark, and what a better model could look like.
52:15 – Building Trust Through Responsible AI: How trust, transparency, and cultural readiness shape whether AI is embraced—or ignored.
63:00 – Reframing Success: Learning, Not Just Automation: Closing reflections on how organizations can prioritize adaptability, curiosity, and practical value in the AI era.
72:30 – Final Takeaways and Where to Learn More: Andrew’s parting thoughts on decision support, ethical data use, and leading with intentionality.

Andrew Gadomski: https://www.linkedin.com/in/andrewgadomski
Aspen Analytics: https://www.aspenanalytics.io/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Ep 74: Fostering Community, Curiosity, and Critical Thinking for AI Readiness with Chris Maurio
Bob Pulver chats with Chris Maurio, Vice President of the Oracle solutions practice at Argano, about the evolution of HR technology, the role of AI in enterprise transformation, and the importance of human-centric approaches in AI governance and ethics. They discuss the need for AI literacy in education and the significance of community in fostering collaboration and learning. The conversation emphasizes the balance between technology and human interaction, the future of AI in education, and the importance of curiosity in learning AI.

Keywords: AI, HR technology, Oracle, automation, ethics, governance, education, community, human-centric, innovation, curiosity, creativity

Takeaways
- Chris Maurio has a background in HR and technology implementation.
- The HR tech space has evolved significantly over the past 20 years.
- Oracle is a leader in enterprise transformation and innovation.
- AI can enhance HR processes but requires careful governance.
- Human-centric design is crucial in AI applications.
- AI literacy should be part of onboarding and compliance training.
- Community plays a vital role in AI learning and collaboration.
- Education about AI should start early in schools.
- Curiosity drives innovation and effective use of AI.
- The future of work will involve a blend of human and machine capabilities.

Sound bites
- “The innovation is incredible.”
- “Community is huge in this regard.”
- “Curiosity is key to learning AI.”

Chapters
00:00 Introduction and Background 02:10 The Evolution of HR Technology 04:56 Oracle's Role in Enterprise Transformation 09:08 AI Integration in HR Systems 12:18 Governance and Compliance in AI 15:23 Human-Centric AI Design 19:19 AI Literacy and Training 22:19 Hands-On Learning with AI 27:28 The Future of AI Education 30:06 Closing Thoughts on AI and Education 30:45 Teaching Critical Thinking in the Age of AI 33:11 Integrating AI into Education 35:28 Balancing Screen Time and Learning 39:28 Fostering Curiosity and Critical Thinking 44:10 Navigating Trust and Ethics in AI 49:38 The Role of AI in Everyday Life 52:50 AI Literacy in Organizations 56:21 Community and Ethical AI Development

Chris Maurio: https://www.linkedin.com/in/chrismaurio
Argano: https://argano.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.

Ep 73: Building Human-Centric AI to Improve Outcomes Across the Talent Lifecycle With Michael Palys and Mike Patchen
Bob Pulver hosts Michael Palys and Mike Patchen, Co-founders of Colleva, to discuss the innovative use of AI in coaching and talent acquisition. They explore the evolution of AI coaching, its applications across various industries, and the importance of addressing AI bias. The discussion highlights the potential of AI to enhance employee training, improve sales performance, and streamline recruitment processes, while emphasizing the need for responsible AI governance. The conversation concludes with insights into upcoming events and the future of AI in the workplace.

Keywords: AI coaching, Colleva, talent acquisition, employee insights, AI bias, sales coaching, healthcare training, performance management, responsible AI, technology summit

Takeaways
- Colleva started with AI coaching and has expanded its use cases.
- AI can play multiple roles in coaching and training.
- The platform is designed for high-performance environments.
- Colleva is being used in healthcare for role-playing scenarios.
- Sales coaching is a natural extension of their AI capabilities.
- AI can help standardize training and improve performance management.
- The platform allows for personalized and customized training experiences.
- AI bias is a critical concern that needs to be addressed.
- Colleva aims to empower employees rather than replace human interaction.
- The future of AI in recruitment is about providing fair opportunities.

Sound bites
- “It's more situation specific.”
- “We can practice and get it right.”
- “Bias mitigation is critical in AI solutions.”

Chapters
00:00 Introduction to Responsible AI and Colleva 02:14 The Genesis of Colleva and AI Coaching 04:49 Expanding Use Cases: From Coaching to Talent Co-Pilot 07:34 Target Markets: Financial Services and Healthcare 10:20 Sales Coaching: Enhancing Revenue Generation 13:04 Employee Insights and Performance Management 15:40 Customizing AI Interactions for Organizations 18:17 User Experience and Feedback on AI Avatars 22:45 From Marketing to Selling: The Evolution of Resumes 23:44 The Importance of 3D Candidate Presentation 25:26 Human-Centric Recruitment: Fairness and Respect 26:46 AI in Recruitment: Enhancing Human Interaction 30:13 AI Governance: Addressing Bias and Trust 34:16 Building Trust in AI Solutions 38:26 Creating a Unified Talent Experience 39:56 AI in Education: Tools for the Next Generation 45:25 Upcoming Events: The NYU Coaching and Technology Summit

Michael Palys: https://www.linkedin.com/in/mpalys/
Mike Patchen: https://www.linkedin.com/in/michael-patchen-39713214/
Colleva: https://www.colleva.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.

Ep 72: Redefining How We Vet Top Talent and Identify AI Readiness with Adam Jackson
Bob Pulver chats with Adam Jackson, Founder and CEO of Braintrust, about the evolution of AI in recruitment, the importance of trustworthy technology, and the future of the job market. They discuss the user experience of AI interviews, the necessity of human oversight, and the skills needed for the future workforce. Adam shares insights on how AI can streamline recruitment processes and the challenges organizations face in adopting these technologies.

Keywords: AI, recruitment, Braintrust, Adam Jackson, job market, technology, bias, user experience, talent acquisition, future of work

Takeaways
- Adam Jackson has a rich entrepreneurial background in tech startups.
- Braintrust is a tech jobs marketplace that uses AI to streamline recruitment.
- Braintrust's AI Recruiter (AIR) solution is designed for fairness and scalability.
- Candidates have reported feeling more relaxed during AI interviews.
- AI can significantly reduce scheduling hassles in recruitment.
- The job market is shifting towards AI-native skills.
- AI can help sift through large volumes of applicants effectively.
- Human oversight remains crucial in the recruitment process.
- Organizations face skepticism when adopting AI technologies.
- Continuous learning and adaptation are essential for future job seekers.

Sound Bites
- “It's all about reducing friction.”
- “AI should and will do it.”
- “The skepticism is well deserved.”

Chapters
00:00 Introduction to Adam Jackson and His Journey 02:28 The Evolution of Braintrust and AI Integration 05:07 Addressing Bias in AI Recruiting 07:48 Candidate Comfort with AI in Recruitment 10:41 The Role of AI in Streamlining Recruitment Processes 13:27 Innovative Approaches to Candidate Assessment 16:19 Outbound Sourcing and Community Engagement 22:24 The Shift in Job Seeking Dynamics 24:58 Navigating the Evolving Job Market 28:20 Humans and AI: A Collaborative Future 30:47 Skepticism and Trust in AI Adoption 33:54 Assessing Candidate Skills with AI 37:58 Generative AI: Opportunities and Challenges

Adam Jackson: https://www.linkedin.com/in/ajackson
Braintrust: https://braintrust.com
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.

Ep 71: What Improv Can Teach Us About Culture, Adaptability and Human-Centricity with Joel Zeff
Bob Pulver sits down with Joel Zeff, who shares his unique journey from journalism to improv comedy and keynote speaking. They explore the importance of adaptability, the role of fun in the workplace, and how improv can teach valuable lessons about change and teamwork. Joel discusses his recent book, which encapsulates these themes, and emphasizes the need for individuals to embrace their potential in the face of change. Bob and Joel also dig into human potential, leadership, and the integration of AI in the workplace. They discuss the importance of embracing change, the role of leaders in fostering team success, and the necessity of human skills in an increasingly automated world. The dialogue emphasizes the need for flexibility, adaptability, and the freedom to make mistakes as essential components of personal and professional growth.

Keywords: improv, comedy, leadership, adaptability, workplace culture, AI, teamwork, human potential, AI integration, education, creativity, change management, team success, human skills

Takeaways
- Joel Zeff transitioned from journalism to comedy and keynote speaking.
- Improv teaches valuable skills about adaptability and change.
- Fun in the workplace is essential for success and fulfillment.
- Creating a positive work culture involves supporting and empowering others.
- Joel's book encapsulates messages about leadership and teamwork.
- Change is inevitable, and how we react to it determines our success.
- Being prepared for change is crucial in any work environment.
- Fun means different things to different people.
- Embracing one's potential is key in the face of technological change.
- The journey of learning and adapting is ongoing and essential.
- Human potential is about taking control of one's path.
- Embracing change is crucial for success.
- Leaders should focus on helping their teams succeed.
- AI integration requires a balance between human and digital labor.
- Human skills are essential in the age of AI.
- Leadership is about building trust and relationships.
- Education must adapt to include AI literacy.
- Flexibility and adaptability are key in a changing world.
- Making mistakes is part of the learning process.
- Improv teaches us to be present and engaged.

Sound Bites
- “You choose to be prepared for change.”
- “Stay in the game, find success.”
- “How do I help my team be successful?”
- “AI should enhance human collaboration.”
- “The future of work is human plus AI.”
- “Education needs to incorporate AI.”
- “Flexibility and adaptability are key.”
- “Embrace mistakes to find freedom.”
- “Control how we react to change.”

Chapters
00:00 The Journey of Joel Zeff: From Journalism to Comedy 02:59 The Power of Improv: Embracing Change and Adaptability 05:50 The Importance of Fun in the Workplace 09:07 Creating a Positive Work Culture 12:11 Unpacking the Book: Messages of Leadership and Teamwork 14:54 Navigating Change in the Age of AI 18:06 Human Potential: Embracing Skills and Adaptability 26:25 Embracing Human Potential 27:17 Staying in the Game: Embracing Change 28:47 The Role of Leadership in Team Success 30:40 Navigating AI and Human Collaboration 33:10 The Importance of Human Skills in AI Integration 36:39 The Future of AI Leadership 40:30 AI in Creative Processes 43:37 Education and AI: A New Paradigm 45:31 Flexibility and Adaptability in a Changing World 49:05 The Freedom to Make Mistakes

Joel Zeff: https://www.joelzeff.com/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com
What’s Your AIQ? Assessment interest form

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.

Ep 70: How AI is Reframing Identity, Employment, and Intelligence with Chris Heuer
Keywords: AI, human-centric design, digital twin, personal intelligence, team dynamics, ethics, data ownership, future of work, meetings, AI literacy, compensation structures, knowledge management, trust, transparency, ethical AI, human-centric AI

Summary
In this episode, Bob Pulver and Chris Heuer explore the intersection of artificial intelligence and humanity, discussing the implications of AI on teamwork, employment, and the future of work. They dive into concepts like digital twins and personal intelligence, emphasizing human-centric design and ethical considerations in AI integration. The conversation highlights the need for a shift in how we view collaboration and collective intelligence in the age of AI, while advocating for a more thoughtful approach to technology that prioritizes human values and relationships. Bob and Chris talk about AI literacy and readiness, and the implications for compensation structures in the workplace. They discuss ethical considerations surrounding AI design and use, and AI's influence on both career trajectories and organizational dynamics.

Takeaways
- AI must be integrated with a focus on humanity.
- Understanding the long-term implications of AI is crucial.
- Digital twins represent personalized AI versions of ourselves.
- Terminology in AI shapes public perception and understanding.
- Ethical considerations are vital in AI development.
- Data ownership and privacy are essential in the AI landscape.
- Meetings should evolve into collaborative conversations.
- Calibrated trust in AI systems is necessary for effective use.
- AI literacy encompasses ethical use and understanding privacy implications.
- Explaining AI's logic is crucial for trust and transparency.
- Sharing knowledge enhances personal and organizational growth.
- Compensation structures must adapt to recognize personal intelligence.
- Digital twins can optimize decision-making in teams.
- Curation of personal intelligence data is essential for quality.
- Licensing personal intelligence could reshape employment models.
- Trust is vital in navigating AI's impact on work.
- Organizations must prioritize human-centric approaches to AI.

Sound Bites
- “AI needs to start grounded in our humanity.”
- “What does it mean for our roles in organizations?”
- “We need to consider human factors in AI integration.”
- “AI can be a teammate, not just a tool.”
- “We need to protect the value of human work.”
- “Calibrated trust is essential with AI agents.”
- “Sharing is power.”
- “You could license me and charge $200 an hour.”
- “Terminology has meaning, right?”

Chapters
00:00 Introduction to AI and Humanity 02:59 The Role of AI in Team Dynamics 06:01 Understanding Digital Twins and Personal Intelligence 08:54 The Implications of AI on Employment 12:10 Navigating Terminology in AI 14:58 The Future of Work and AI Integration 18:06 Personal Intelligence vs. Collective Intelligence 20:57 The Ethics of AI and Data Ownership 23:46 The Evolution of Meetings in the AI Era 27:02 Calibrated Trust in AI Systems 34:37 Understanding AI Literacy and Its Components 39:05 The Role of Personal Intelligence in Compensation 45:49 Navigating Knowledge Management and Digital Twins 50:04 Curating Personal Intelligence for Future Value 56:34 The Future of Work: Licensing Personal Intelligence 01:01:11 Trust and Transparency in the Age of AI

Chris Heuer: https://www.linkedin.com/in/chrisheuer
Team Flow Institute: https://teamflow.institute/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Substack: https://elevateyouraiq.substack.com

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.

Ep 69: Prioritizing Learning Agility for the Current and Future Workforce with Professor Anna Tavis
Bob sits down with Dr. Anna Tavis, a leader in human capital management and higher education. They discuss the evolution of HR skills, the impact of AI on education, and the importance of personalized learning. Dr. Tavis emphasizes the need for educational institutions to adapt to the changing landscape and prepare students for future challenges. The conversation also touches on the role of technology in coaching and mentorship, and the necessity of developing critical thinking skills in learners. Anna and Bob discuss how data-driven feedback can enhance learning and performance, the importance of trust in AI coaching, and the evolving role of universities in preparing students for a dynamic job market. They emphasize the need for continuous learning and adaptability in an increasingly automated world, while addressing concerns about job displacement and the importance of human connection in the workplace.

Takeaways
- Education curriculum must evolve rapidly to keep pace with technological advancements.
- Personalized learning and coaching are essential for student success.
- Educational institutions need to focus on the purpose and outcomes of their programs.
- Technology can bridge gaps in education, especially for diverse learners.
- Critical thinking skills are crucial for navigating the complexities of the modern world.
- AI tools can enhance the coaching and mentorship experience.
- Flexibility in educational programs allows for personalized learning journeys.
- Integration of technology in education is inevitable and necessary.
- Future skills will require a blend of knowledge, adaptability, and critical thinking.
- Data access allows for continuous feedback in education.
- AI can transform performance management processes.
- Learning agility is essential in the modern workforce.
- Trust in AI coaching is growing among younger generations.
- Universities must adapt to the changing job landscape.
- Every job will become an augmented job with AI.
- Human connection remains vital in the workplace.
- AI can help individuals develop self-reflection skills.
- The role of middle management will evolve, not disappear.
- Continuous learning and adaptability are key to success.

Sound Bites
- “The educational model we've built is changing.”
- “Education is not just about knowledge.”
- “We need to create more flexibility for students.”
- “We can provide feedback just in time based on data.”
- “You need the ability to continuously learn.”
- “Trust in AI coaching is higher than in human coaching.”
- “Every job will be an augmented job.”
- “We just need to be thinking about what's next.”

Chapters
00:00 Introduction to Dr. Anna Tavis and Her Work 03:09 Evolution of Human Capital Management 06:05 Education in the Age of AI 09:03 Importance of Purpose in Education 11:53 Personalized Learning and Coaching 14:53 Role of Technology in Education 18:04 Future of Coaching and Mentorship 20:54 Bridging Gaps in Education with AI 24:04 Need for Change in Educational Practices 27:25 Power of Data in Education 30:33 Transforming Performance Management with AI 33:13 Learning Agility and the Role of AI 36:49 Trust in AI and Coaching 39:27 Role of Universities in a Changing Landscape 43:19 Redesigning Work in the Age of AI

Dr. Anna Tavis: https://www.linkedin.com/in/annatavis
NYU SPS: https://www.sps.nyu.edu/
NYU Coaching & Technology Summit: https://www.sps.nyu.edu/homepage/academics/divisions-and-departments/division-of-programs-in-business/human-capital-management/coaching-and-technology-summit.html
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.

Ep 68: Sustaining High Performance, Wellbeing, and Human-Centricity with Bianca Errigo
Keywords
Bianca Errigo, Human OS, wellbeing, performance, sustainability, mental health, coaching, technology, entrepreneurship, human-centricity, AI, organizational change, employee engagement, psychological safety, workplace culture

Summary
Bob Pulver speaks with Bianca Errigo, founder of Human OS, about her journey from tech sales to wellness expert and entrepreneur. Bianca shares her personal experiences with burnout and mental health, which led her to create Human OS, a platform that combines wellbeing, performance, and sustainability through personalized coaching and technology. The conversation explores the importance of support systems in maintaining high performance and how technology can enhance individual wellbeing. Bob and Bianca address organizational challenges with change, the impact of AI on workplace dynamics, and the importance of individual readiness and managerial support. The discussion emphasizes the need for open communication, co-creation, and a focus on human-centric values in the face of technological advancements.

Takeaways
Bianca's career path is diverse and shaped by personal experiences.
Burnout led Bianca to focus on mental and physical health.
Human OS aims to make wellbeing accessible to everyone.
The platform supports organizations in creating healthy work environments.
Data tracking and habit tracking are key features of Human OS.
Support and vulnerability are crucial for personal growth.
Bianca has extensive coaching experience across various demographics.
The intersection of technology and human expertise is vital for success.
Sustainable high performance is achievable without compromising health.
Entrepreneurship comes with its own set of challenges and rewards.
Human OS combines wellbeing, performance, and sustainability.
Most wellbeing solutions are reactive and not personalized.
Organizations need to address cultural issues proactively.
AI is often misunderstood and can create fear.
Education about AI is crucial for both employers and employees.
Change can be positive if managed correctly.
Individual readiness impacts how change is received.
Success is defined differently for each individual.
Engagement and belonging are key to a fulfilled workforce.
Co-creation with employees enhances AI adoption.

Sound Bites
"AI can improve human experience and skills."
"Communication is key during times of change."
"We are all in this together with AI."
"Organizations need to communicate openly about AI."
"Involving employees as co-creators is crucial."
"Don't bury your head in the sand about AI."

Chapters
00:00 Introduction to Bianca Errigo and Her Journey
03:04 The Evolution of Human OS and Its Mission
05:45 The Intersection of Wellbeing and Technology
10:00 Introduction to Human OS and Its Mission
12:11 Navigating Organizational Change and Culture
14:49 The Role of AI in Workplace Transformation
22:55 Individual Readiness and Managerial Impact
28:39 Embracing Change and Defining Success
35:28 Fostering Engagement and Co-Creation in AI Adoption
46:27 Preparing for the Future Workforce

Bianca Errigo: https://www.linkedin.com/in/biancaerrigo
HumanOS: https://humanos.co.uk/

For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com

Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform that helps HR technology providers demonstrate their AI-powered solutions are fair, compliant, and trustworthy.