Lunchtime BABLing with Dr. Shea Brown


73 episodes

Ep 73: Model Drift to Bias and Discrimination: The Many Risks of AI, Part 2

In Part 2 of this Lunchtime BABLing series on AI risk, Dr. Shea Brown, CEO of BABL AI, is joined again by Jeffery Recker to continue their lightning-round exploration of the real challenges organizations face when deploying AI. This episode dives deeper into critical concepts such as model drift, bias vs. discrimination, and growing explainability gaps in modern AI systems — especially as organizations increasingly rely on large language models and automated decision-making tools.

Together, they discuss:
- What model drift is and how organizations can detect and manage it
- Why users (not just developers) should understand performance drift in AI systems
- The important distinction between statistical bias and illegal discrimination
- How bias can emerge even when demographic data isn’t explicitly used
- The role of diversity of thought and structured risk assessments in uncovering AI risks
- Why explainability is becoming harder as AI models grow more complex
- The trade-offs between performance, trust, fairness, and regulatory compliance

The conversation also explores broader questions around how AI is being used today, the limitations of “black-box” systems, and why validation, testing, and governance are becoming essential capabilities for organizations adopting AI at scale.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 23, 2026 · 35 min

Ep 72: Data Poisoning to Hallucinations: The Many Risks of AI, Part 1

In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today’s AI systems. From data poisoning and model inversion to prompt injection, membership inference, and AI hallucinations, this lightning-round conversation breaks down the security, governance, and reliability challenges organizations must understand before deploying AI at scale.

But this episode doesn’t stop at definitions. Shea and Jeffery also explore:
- The difference between direct vs. indirect prompt injection
- Whether AI hallucinations can ever truly be “solved”
- Why AI isn’t a truth machine
- Whether we’re using AI the wrong way
- What responsible validation should look like in enterprise AI deployment

As AI systems move from experimentation into real-world decision-making, understanding these risks isn’t optional — it’s foundational. If you're working in AI governance, assurance, compliance, risk, or deploying AI inside your organization, this conversation will help you think more critically about how these systems actually behave.

🎯 Take the FREE assessment here: https://shea-1mb3pmep.scoreapp.com/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 9, 2026 · 34 min

Ep 71: AI Test, Evaluation, & Red Teaming Specialist Bootcamp

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown introduces the new AI Test, Evaluation, & Red Teaming Specialist Bootcamp — a hands-on, technical program designed to train the next generation of AI assurance professionals. Drawing directly from BABL AI’s internal methodologies used to audit and evaluate high-risk AI systems across industries, this bootcamp addresses one of the most critical gaps in the AI ecosystem: the lack of practical training in how to design, execute, and interpret rigorous AI testing and red teaming in real-world contexts.

Dr. Brown explains:
- Why AI testing, evaluation, and red teaming are essential for high-risk AI systems
- How BABL AI developed its internal, risk-driven testing and assurance frameworks
- The difference between auditing AI systems and directly evaluating and validating them
- What participants will learn during the five-week, hands-on bootcamp
- The prerequisites, structure, and technical depth of the program
- How this bootcamp will evolve into BABL’s new AI Test, Evaluation, & Red Teaming Specialist Certification

This exclusive early-adopter cohort is limited to approximately 30 participants and is designed for professionals with foundational knowledge in AI auditing, governance, or assurance who want to develop practical technical capabilities in AI evaluation and red teaming. Participants will learn how to move systematically from an AI use case to defensible test results — building real test plans, executing evaluations, and developing assurance-relevant conclusions using BABL’s proven frameworks.

Take the test to see if you are a good candidate for the AI Test, Evaluation, & Red Teaming Specialist Bootcamp: https://zfrmz.eu/RBroC4VLZ9I41ihKl1XV

Learn more about BABL AI Certifications: www.babl.ai

About Lunchtime BABLing: Lunchtime BABLing is hosted by Dr. Shea Brown, CEO of BABL AI, an independent AI assurance firm that audits algorithms for bias, risk, and governance. The podcast explores AI auditing, governance, regulation, and technical assurance practices shaping the future of trustworthy AI.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Feb 23, 2026 · 28 min

Ep 70: An Interview with Mert Çuhadaroğlu

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI & Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor.

Mert shares his unique professional journey — from banking and finance, to career coaching and publishing, to becoming a leading figure in AI ethics and auditing. Now based in Istanbul, Mert plays a critical role in guiding and evaluating BABL AI certification students, including reviewing capstone projects and supporting professionals from a wide range of backgrounds.

Together, Shea and Mert discuss:
- What makes BABL AI’s AI & Algorithm Auditor Certification different from other AI governance programs
- Whether you need a technical background to succeed in AI auditing
- The real-world demand for AI auditors and AI governance professionals
- Common career paths for certification graduates
- What students actually do in the capstone project (including LLM and generative AI use cases)
- How BABL AI’s certifications compare to other industry credentials
- An overview of BABL AI’s additional certification programs, including EU AI Act Quality Management Systems, AI Governance for Business Professionals, and AI for Legal Professionals

This episode is both a behind-the-scenes look at BABL AI’s training philosophy and a practical guide for anyone considering a career in AI assurance, audit, or governance.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Dec 22, 2025 · 34 min

Ep 69: Diving into the AI Compliance Officer

What does a Chief AI Compliance Officer actually do — and does your organization secretly need one already? 🤔

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan. Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.

In this episode, they discuss:

What a Chief AI Compliance Officer role looks like in practice
– Why it often lands on general counsel, chief compliance officers, or chief AI officers
– Why this work can’t be owned by one person alone

The 3-part structure of BABL AI’s AI Compliance Officer Program
– AI foundations: governance, AI management systems, policies, procedures, and documentation
– Fractional AI Compliance Officer support: access to BABL’s research and audit team on an ongoing basis
– Continuous monitoring & measurement: keeping up with self-learning, changing AI systems over time

How to build an AI system inventory and triage risk
– A simple rubric for identifying high-, medium-, and low-risk AI systems
– When to treat a system as “high risk” by default
– Why simplicity is the antidote to feeling overwhelmed

Key AI risks every organization should know about
– Data poisoning and how malicious instructions can sneak into your systems
– Shadow AI (employees using unapproved tools like personal ChatGPT accounts)
– Model & data drift, and why “it worked when we launched it” isn’t good enough
– How these risks connect to reputation, regulatory exposure, and business strategy

Why governance, risk & compliance (GRC) is not a “brake” on innovation
– How good governance actually lets you move faster and more confidently
– The value of a “SWAT team” style AI compliance function vs. going it alone

Who should watch/listen?
– General counsel, chief compliance officers, chief risk officers
– Chief AI / data / technology leaders
– Product owners building AI-powered tools
– Anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Dec 8, 2025 · 42 min

Ep 68: Implementing AI into Your Career

In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question: How do you actually implement AI into your career… without losing yourself (or your job) in the process?

Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.

🎧 In this episode, we cover:
- How to start using large language models (LLMs) and agents in your day-to-day work
- Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
- What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
- How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
- A practical plan if you’ve been laid off or see layoffs coming: dual-track job search + AI pivot
- Using AI ethically for resumes, ATS filters, and video interviews — without fabricating experience
- Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
- How to set boundaries with AI so it augments your work, not your identity or mental health
- Mindset shifts for people who don’t feel “technical” but still need to adapt

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Nov 24, 2025 · 47 min

Ep 67: AI, Training & the Job Market

In this latest episode of Lunchtime BABLing, hosted by BABL AI CEO Dr. Shea Brown with COO Jeffery Recker and — making her first appearance — Chief of Staff Emily Brown, we dig into what today’s AI-shaped job market really means for knowledge workers, how to build durable skills, and why “human in the loop” still matters — especially in marketing, ops, and hiring.

🎧 What you’ll learn:
- Why AI anxiety is spiking — and how to respond with deliberate upskilling
- The #1 meta-skill: building a strong filter (concise, expert-informed outputs > AI slop)
- How AI literacy translates to any role (marketing, people ops, compliance, product)
- Practical ways to pivot toward Responsible AI / AI assurance / AI auditing
- Why specialization beats chasing every trend (go narrow, go deep, then pivot)
- The value of community: mentorship, peer feedback, and portfolio/capstone work

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Nov 10, 2025 · 45 min

Ep 66: AI and Scheduling Optimization with Leon Ingelse

From lesson-planning to long-haul trucking, good schedules make the world run — literally. In this episode, BABL AI CEO Dr. Shea Brown sits down with Leon Ingelse, writer-researcher at Croatian optimization studio Dots & Lines, to unpack the hidden math, ethics, and human stories behind modern scheduling and routing.

🔑 What we cover:
- Hard vs. soft constraints: why “can’t” and “prefer not to” need different math
- Digital twins: building a virtual copy of a business before you touch the real one
- Fairness & “karma” scheduling: balancing preferences over weeks, months, years
- Transparency & compliance: explaining a timetable (and the laws baked into it)
- Human-in-the-loop vs. full automation: when you still want a person pressing “publish”
- Optimization ≠ LLMs: where stochastic AI falls short and formal models shine
- The future of Dots & Lines, and why bespoke solutions often beat off-the-shelf products

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jul 14, 2025 · 41 min

Ep 65: How to Break Into AI Governance

Ever wondered how to start a career in AI Governance, Responsible AI, or AI Risk Management? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a no-nonsense, practical conversation about how to actually break into this fast-growing, high-demand field.

🌟 What you'll learn in this episode:
✅ What AI governance really is (and why it matters in every business using AI)
✅ The 3 main career paths into AI governance:
   - Dedicated governance roles
   - Expanding your current role to include AI oversight
   - Building something new as an entrepreneur/intrapreneur
✅ Do you need to be technical? How much?
✅ The real skills hiring managers want
✅ How to transition from zero experience to credible candidate
✅ Why governance is essential for scaling AI safely and responsibly

🧭 Key themes:
- Hands-on learning: you have to use AI to govern AI
- Systems thinking: understanding how decisions get made at scale
- Risk awareness: the #1 thing employers want
- Building your profile: projects, credentials, volunteering, networking
- Niche strategy: why specializing beats general buzzwords
- Marathon mindset: this is not a quick certification cash-in

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jun 30, 2025 · 48 min

Ep 64: AI Ethicist Reacts to Different Uses of AI

In this fun and thought-provoking episode of Lunchtime BABLing, BABL AI CEO and AI ethicist Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a rapid-fire discussion of some of the most surprising, bizarre, and controversial uses of AI circulating online. From jailbreaking legal loopholes with ChatGPT, to AI-generated testimony from the deceased, to digital therapy bots and AI relationships — no use case is off-limits. The trio explores the ethical, legal, and emotional implications of everyday AI encounters, reacting in real time with humor, insight, and a healthy dose of skepticism.

🎧 Topics include:
- Can AI help someone get out of jail?
- Is it ethical to use AI-generated avatars in court?
- Talking to an AI version of a dead loved one — grief or avoidance?
- Should AI replace your therapist?
- Professors using ChatGPT to grade student essays
- AI as your relationship coach (or third wheel)
- Confirmation bias and the future of learning in the AI age

💬 This episode steps away from regulation and compliance to explore how AI is quietly reshaping human behavior — and whether we’re ready for it.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jun 16, 2025 · 37 min

Ep 63: What is ISO 42001?

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to break down ISO/IEC 42001 — the first international standard for AI management systems. Whether you're leading an AI team, navigating AI risk, or just starting your Responsible AI journey, this high-level introduction will help you understand:
- What ISO 42001 is and why it matters
- How it fits into global AI governance (including the EU AI Act and U.S. regulations)
- Key components of the standard — from leadership, risk assessments, and operations to monitoring and continual improvement
- Common challenges organizations face when adopting it
- Practical first steps for implementation, even for startups and resource-limited teams

💡 ISO 42001 is quickly becoming the North Star for organizations aiming to demonstrate trustworthy and responsible AI practices — especially in today’s fast-moving regulatory environment.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jun 2, 2025 · 26 min

Ep 62: A New Framework to Assess the Business VALUE of AI

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown unveils a powerful new framework for assessing business value when implementing AI — shifting the conversation from “Which tool should I use?” to “What value do I want to create?” Joined by CSO Bryan Ilg and COO Jeffery Recker, the trio dives into the origin, design, and real-world application of the AI VALUE Framework:
- Visualize your operations
- Ask the right questions
- Link to AI capabilities
- Understand feasibility & risk
- Experiment & evaluate

This episode is packed with insights for business leaders, innovation teams, and AI professionals navigating the hype, risk, and opportunity of artificial intelligence. The framework — originally developed for BABL AI’s upcoming certification for business professionals — is meant to reduce AI project failure and help organizations do it right, not fast.

💡 Key topics:
- The difference between asking about tools vs. asking about value
- Why most AI projects fail — and how to avoid it
- How AI governance can create value, not just mitigate risk
- The importance of metrics, pilot testing, and customer focus
- Why being proactive beats being reactive in AI implementation

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

May 19, 2025 · 32 min

Ep 61: The Importance of AI Governance

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices — not just for ethical reasons, but as a competitive advantage. They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes. Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI's new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills. If you're interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Apr 28, 2025 · 40 min

Ep 60: Ensuring LLM Safety

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.

🎯 What you'll learn:
- Why evaluations are essential for mitigating risk and supporting compliance
- How to adopt a socio-technical mindset and think in terms of parameter spaces
- What auditors (like BABL AI) look for when assessing LLM-powered systems
- A practical, first-principles approach to building and documenting LLM test suites
- How to connect risk assessments to specific LLM behaviors and evaluations
- The importance of contextualizing evaluations to your use case — not just relying on generic benchmarks

Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.

📌 Don’t wait for a perfect standard to tell you what to do — learn how to build a solid, use-case-driven evaluation strategy today.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Apr 7, 2025 · 27 min

Ep 59: Explainability of AI

What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do — and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability — and why it matters more than you might think.

From recommender systems to large language models, we explore:
🔍 The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a “good enough” explanation
- The importance of stakeholder context in defining “useful” explanations
- Why AI literacy and trust go hand in hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight

Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.

Mentioned in this episode:
🔗 BABL AI's article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
🔗 "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/
🔗 BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 31, 2025 · 34 min

Ep 58: AI’s Impact on Democracy

In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly — and sometimes loudly — reshaping our democratic systems.

- What happens when personalized content becomes political propaganda?
- Is YouTube the new social media without us realizing it?
- Can regulations keep up with AI’s accelerating influence?
- And are we already too far gone — or is there still time to rethink, regulate, and reclaim our democratic integrity?

This episode dives into:
- The unintended consequences of algorithmic curation
- The collapse of objective reality in the digital age
- AI-driven misinformation in elections
- The tension between regulation and free speech
- Global responses — from Finland’s education system to the EU AI Act
- What society can (and should) do to fight back

Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.

🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 24, 2025 · 45 min

Ep 57: AI Literacy

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy — what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.

Topics covered:
- The evolution of AI education and BABL AI’s new subscription model for training & certifications
- Why AI auditing skills are becoming essential for professionals across industries
- How AI governance roles will shape the future of business leadership
- The impact of AI on workforce transition and how individuals can future-proof their careers
- The EU AI Act’s new AI literacy requirements — what they mean for organizations

Want to level up your AI knowledge? Check out BABL AI’s courses & certifications!
🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership
👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 17, 2025 · 20 min

Ep 56: Shea Visits RightsCon 2025

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg as they dive into the big takeaways from the conference and what it means for the future of AI governance.

What’s in this episode?
✅ RightsCon recap: how AI has taken over the human rights agenda
✅ AI auditing & accountability: why organizations need to prove AI compliance
✅ Investors are paying attention: why AI risk management is becoming a priority
✅ The role of education: why AI literacy is the key to ethical and responsible AI
✅ The International Association of Algorithmic Auditors: a new professional field is emerging

🚀 If you're passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 3, 2025 · 24 min

Ep 55: A Conversation with Ezra Schwartz on UX Design

Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience — and how it intersects with responsible AI.

In this episode, you'll discover:
• Ezra’s journey: from being a student in our AI & Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.
• Beyond UI design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.
• The role of UX in AI: learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.
• Age tech insights: explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population — and the importance of balancing technology with privacy and ethical considerations.

If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.

👉 Connect with Ezra Schwartz:
Website: https://www.artandtech.com
LinkedIn: https://www.linkedin.com/in/ezraschwartz
Responsible AgeTech Conference he’s organizing: https://responsible-agetech.org

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Feb 24, 2025 · 33 min

Ep 54: Interview with Mahesh Chandra Mukkamala from Quantpi

In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black-box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.

💡 Topics covered:
✔️ What is black-box AI testing, and why is it crucial?
✔️ How Quantpi ensures model robustness and fairness across different AI systems
✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance
✔️ Challenges businesses face in AI model evaluation, and best practices for testing
✔️ Career insights for aspiring AI governance professionals

With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.

🇩🇪 People can join Quantpi's "RAI in Action" event series kicking off in Germany in March: https://www.quantpi.com/resources/events
🇺🇸 U.S.-based folks can join Quantpi's GTC session on March 20th, "A scalable approach toward trustworthy AI": https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&linkId=100000328230011&tab.catalogallsessionstab=16566177511100015Kus&search=antoine#/session/1726160038299001jn0f

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
📚 Sign up for our courses today: https://babl.ai/courses/
🔗 Follow us for more: https://linktr.ee/babl.ai

🔔 Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!
📢 Listen to the podcast on all major podcast streaming platforms
📩 Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/
📌 Follow Quantpi for more AI insights: https://www.quantpi.com

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Feb 17, 2025 · 27 min

Ep 53: EU AI Act Comes Into Effect and the Regulatory Uncertainty of North America

Join host Dr. Shea Brown (CEO of BABL AI) along with guest speakers COO Jeffery Recker and CSO Bryan Ilg for an in-depth discussion on the rapidly evolving world of AI regulation. In this episode, our panel unpacks:

- The EU AI Act in action: the new obligations now in force under the EU AI Act — including the crucial requirements of AI literacy (Article 4) and the prohibition of certain AI practices (Article 5).
- Compliance timelines & what’s next: the lowdown on the phased rollout, with upcoming standards and enforcement deadlines on the horizon, plus practical steps companies should take to prepare.
- The North American regulatory landscape: the contrasting regulatory approaches in North America, from the shifting federal stance in the US to state- and city-specific laws (like Colorado’s AI Act and New York City’s Local Law 144), and why this uncertainty matters for businesses.
- Risk, ethics & the future of AI in business: the importance of risk management, AI literacy training, and human-centered design. Our guests share insights on why responsible AI isn’t just about compliance — it’s also a competitive advantage in today’s fast-paced market.

Whether you’re a business leader, technologist, or policy enthusiast, this episode offers valuable perspectives on how organizations can navigate the complex, global landscape of AI governance while protecting their customers and staying ahead of regulatory demands.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Feb 10, 2025 · 51 min

Ep 52: Interview with Abhi Sanka

Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI's inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journey — from studying the ethics of the Human Genome Project at Duke University, to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft.

Explore Abhi's insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society.

📌 Highlights:
- Abhi’s academic and professional path to responsible AI
- The challenges of auditing agentic AI and aligning governance frameworks
- The importance of community and collaboration in advancing responsible AI
- Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community

Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation!

🔗 Abhi's LinkedIn: https://www.linkedin.com/in/abhisanka/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jan 27, 202533 min

Ep 51: An Interview with Soribel Feliz

🎙️ In this engaging episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with special guest Soribel Feliz, a former US diplomat turned AI governance expert. Soribel shares her fascinating career journey from the State Department to big tech roles at Meta and Microsoft, and now as an AI governance and compliance specialist at DHS. 🚀 From her early experiences moderating content algorithms at Meta to advising on AI policy in the US Senate, Soribel discusses the evolution of AI, its ethical challenges, and the crucial importance of data privacy and workforce impacts. She also opens up about transitioning into the tech world, overcoming technical learning curves, and her dedication to helping others navigate career uncertainties in the AI-driven future. 🌍✨ 🔑 Key Highlights: Soribel's career leap from diplomacy to tech and AI policy. The ethical dilemmas and societal impacts of AI she’s witnessed firsthand. Her thoughts on AI literacy gaps and the need for growth mindset education. Practical advice for those transitioning into AI or confronting job uncertainties. 🌟 This episode is packed with wisdom, optimism, and actionable insights for young professionals, career changers, and anyone passionate about responsible AI. 📌 Follow Soribel Feliz for more on AI governance, career guidance, and navigating uncertainty in a rapidly evolving world. Links to her website and newsletter are in the description below. Linkedin: https://www.linkedin.com/in/soribel-f-b5242b14/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jan 13, 202524 min

Ep 50: 2024 - An AI Year in Review

🎙️ Lunchtime BABLing: 2024 - An AI Year in Review 🎙️ Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI! In this final episode of the year, the trio dives into: 🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024. 📈 How large language models are redefining audits and operational workflows. 🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide. 📚 The rise of AI literacy and the "race for competency" in businesses and society. 🤖 Exciting (and risky!) trends like AI agents and their potential for transformation in 2025. Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&A session! 🎉 Looking Ahead to 2025 What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead. 📌 Key Takeaways: AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations. Education and competency-building are essential to navigating the changing AI landscape. The global regulatory response is reshaping how AI is developed, deployed, and audited. Link to Raymon Sun's Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker 💡 Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Dec 30, 202440 min

Ep 49: An Interview with Aleksandr Tiulkanov

In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀 🔍 What’s in this episode? The transition from legal tech to AI compliance. Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act. How the EU AI Act fits into Europe’s product safety legislation. The challenges and confusion around conformity assessments and AI literacy requirements. Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals. 🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001. 📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info 🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Dec 16, 202443 min

Ep 48: The future of jobs with AI

In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work? From fears of job displacement to the rise of entirely new roles, the trio explores: 🔹 How AI will reshape industries and automate parts of our jobs. 🔹 The importance of upskilling to stay competitive in an AI-driven world. 🔹 Emerging career paths in responsible AI, compliance, and risk management. 🔹 The delicate balance between technological disruption and human creativity. 📌 Whether you're a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you. 👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role. 🎧 Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Dec 2, 202434 min

Ep 47: How Will a Trump Presidency Impact AI Regulation?

🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations? In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖 Key topics include: Federal deregulation and the push for state-level AI governance. The potential repeal of Biden's executive order on AI. Implications for organizations navigating a fragmented compliance framework. The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies. How deregulation might affect innovation, litigation, and risk management in AI development. This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Nov 18, 202436 min

Ep 46: A BABL Deep Dive

Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice. Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems. This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management. Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Nov 4, 202451 min

Ep 45: AI Literacy Requirements of the EU AI Act

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20". 📚 Courses Mentioned: 1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce 2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems 3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification 4️⃣ BABL AI Course Catalog: https://babl.ai/courses/ 🔗 Follow us for more: https://linktr.ee/babl.ai In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements. Throughout the episode, Dr. Brown covers: AI literacy obligations for providers and deployers under the EU AI Act. The importance of AI literacy in ensuring compliance. An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Oct 21, 202420 min

Ep 44: AI Frenzy: Will It Really Replace Our Jobs?

In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs? Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is? They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond. If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Oct 7, 202416 min

Ep 43: How NIST Might Help Deloitte With the FTC

Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations. Shea and Jeffery focus on a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand. 📍 Topics discussed: Deloitte’s Medicaid eligibility system in Texas The role of the FTC and the NIST AI Risk Management Framework How AI governance can safeguard against unintentional harm Why proactive risk management is key, even for non-AI systems What companies can learn from this case to improve compliance and oversight Tune in now and stay ahead of the curve! 🔊✨ 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Sep 23, 202432 min

Ep 42: The Regulatory Landscape for AI in Insurance

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Sep 2, 202434 min

Ep 41: Where to Get Started with the EU AI Act: Part Two

In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there. In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including: Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records. Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken. Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU. Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance. Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy. Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future. 🔗 Key Topics Discussed: What documentation and transparency measures are required to demonstrate compliance? How can businesses effectively maintain and update these records? How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation? What are the biggest challenges you foresee in complying with the EU AI Act? 
What resources or support mechanisms are being provided to businesses to help them comply with the new regulations? How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector? What are the penalties for non-compliance, and how will they be determined and applied? What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy? What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve? How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary? 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Aug 12, 202446 min

Ep 40: Where to Get Started with the EU AI Act: Part One

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations. With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations. The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act. Don't miss this informative session to ensure your organization is ready for the changes ahead! 🔗 Key Topics Discussed: What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? What impact will this have outside the EU? What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act? Are there any particular high-risk AI systems that require special attention under the new regulations? How do you assess and manage the risks associated with AI systems? What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of? How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations? How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act? 
📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips! 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. #AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Aug 12, 202421 min

Ep 39: Building Trust in AI

Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of "Building Trust in AI." Episode Highlights: Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors. AI's Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations. Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust. Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability. Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices. Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders. If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jul 8, 202430 min

Ep 38: NYC AI Bias Law: One Year In and What to Consider

Join us for an insightful episode of "Lunchtime BABLing" as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace. Episode Highlights: Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers. Year One Insights: What has been learned from the first year of compliance, including common challenges and successes. Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance. Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits. Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively. This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan. 🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jul 1, 202420 min

Ep 37: Understanding Colorado's New AI Consumer Protection Law

In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado's pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers. Listeners, don't miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: Enjoy 20% off all courses using the coupon code "BABLING20." Explore our courses here: https://courses.babl.ai/ For a deeper dive into Colorado's AI law, check out our detailed blog post: "Colorado's Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law". Don't forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights. Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/ Timestamps: 00:21 - Welcome and Introductions 00:43 - Overview of Colorado's AI Consumer Protection Law 01:52 - State vs. Federal Initiatives in AI Regulation 04:00 - Detailed Discussion on the Law's Provisions 07:02 - Risk Management and Compliance Techniques 09:51 - Importance of Proper Documentation 12:21 - Developer and Deployer Obligations 17:12 - Strategies for Public Disclosure and Risk Notification 20:48 - Annual Impact Assessments 22:44 - Transparency in AI Decision-Making 24:05 - Consumer Rights in AI Decisions 26:03 - Public Disclosure Requirements 28:36 - Final Thoughts and Takeaways Remember to like, subscribe, and comment with your thoughts or questions. 
Your interaction helps us bring more valuable content to you! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jun 3, 202431 min

Ep 36: NIST AI Risk Management Framework & Generative AI Profile

🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies. 🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management. 📑 Titled "NIST AI Risk Management Framework: Generative AI Profile," this episode delves deep into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias. 🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies. 🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices. 🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. 
Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions. 🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

May 6, 202444 min

Ep 35: The EU AI Act: Prohibited and High-Risk Systems and why you should care

In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, the COO of BABL AI. Titled "The EU AI Act: Prohibited and High-Risk Systems and why you should care," this conversation sheds light on the recent passing of the EU AI Act by the parliament and its implications for businesses and individuals alike. Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU. The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem. Key Topics Covered: Overview of the EU AI Act and its journey to enactment Differentiating prohibited and high-risk AI systems Understanding biases in AI algorithms and their implications Compliance challenges and the importance of early action How BABL AI supports organizations in achieving compliance and building trust Why You Should Tune In: Whether you're a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance. Don't Miss Out: Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. 
Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Apr 8, 202425 min

Ep 34: Live Webinar Q&A Recording: Finding Your Place in AI Ethics Consulting

Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&A session on carving out a niche in AI Ethics Consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies. In This Episode: Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI. Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm. Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics. Live Q&A Highlights: Audience questions range from enrolling in AI ethics courses, the role of lawyers in AI audits, to the importance of philosophy in AI ethics consulting. Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one’s niche. Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits. Building a Career in AI Ethics: Discussion on the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams. Key Takeaways: The essential blend of skills needed in AI ethics consulting. Insights into the challenges and opportunities in the field of AI ethics. Practical advice for individuals looking to enter or pivot into AI ethics consulting. Don’t miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you’re new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey. 
Listeners can use coupon code "FREEFEB" to get our "Finding Your Place in AI Ethics Consulting" course for free. Link on our Website. Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% on all our course offerings. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Mar 18, 202459 min

Ep 33: NIST, ISO 42001, and BABL AI online courses

Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI. What's Inside: 1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications. 2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it's poised to have on the AI industry. 3. Aligning Education with Innovation: We also explore how BABL AI’s online courses are perfectly aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you're a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Feb 19, 202410 min

Ep 32: Navigating Global AI Regulatory Compliance

Sign up for free for our online course "Finding Your Place in AI Ethics Consulting" during the month of February 2024. 🌍 In this episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally. 🔍 Highlights of This Episode: EU AI Act: Your Compliance Compass - Discover how the European Union's AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges. Common Grounds in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes across global regulatory requirements. Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences. NIST's Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI. 🚀 Takeaway: This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you're a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical. 👉 Subscribe to our channel for more insights into AI technology and its global impact. Don't forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge. 
#AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Feb 5, 202411 min

Ep 31Exploring the socio-technical side of AI Ethics (Re-uploaded)

Sign up for free during the month of February for our online course "Finding your place in AI Ethics Consulting." Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code "BABLING." Link here: https://babl.ai/courses/ 🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects. 🎙️ Join our host, Shea Brown, as they welcome a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence. 🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li's joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives. Link to paper here: https://arxiv.org/abs/2209.00692 💡 Whether you're a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let's BABL about the socio-technical side of AI ethics. 👍 Don't forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen. 📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let's keep the conversation going!
🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads. #LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jan 29, 202450 min

Ep 30What Companies Need To Consider When Implementing AI

📺 About This Episode: Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about "What Things Should Companies Consider When Implementing AI." Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape. In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain. Whether you're a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice. 🔗 Stay Connected: Hit that like and subscribe button for more enlightening episodes. Tune into our podcast across various platforms for your on-the-go AI insights. 👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can't wait to share more in our upcoming episodes! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jan 22, 202432 min

Ep 29Key Takeaways of the EU AI Act | Lunchtime BABLing

Description: 🔊 Welcome to another episode of Lunchtime BABLing, where we dive deep into the world of AI and its impact on our lives. In this episode, "Key Takeaways of the EU AI Act," join our hosts, Shea Brown, CEO of BABL AI, and Jeffery Recker, for a comprehensive analysis of the recently agreed-upon EU AI Act. 🌍 The EU AI Act is making waves as a global law that regulates the use of artificial intelligence. It's comparable to how GDPR reshaped privacy laws, and now the EU AI Act is set to do the same for AI. This episode breaks down the Act's implications, its potential effects on companies and individuals, and what the future of AI governance might look like under this new regulation. 🔑 Highlights of the episode include: A detailed explanation of what the EU AI Act entails and why it's a game-changer. Insights into who will be affected by the Act and how it extends beyond European borders. The classification of AI systems under the Act based on risk levels, including prohibited and high-risk categories. A look into the conformity assessment process and the compliance requirements for organizations. Practical steps organizations should take to prepare for compliance. 🤔 Whether you're a tech enthusiast, an AI professional, or just curious about how AI laws impact our world, this episode offers valuable insights. Join us as we unravel the complexities of the EU AI Act and its far-reaching consequences. 📣 Do you have specific questions about the EU AI Act or AI governance? Leave your comments below or reach out to us! Don't forget to like and subscribe if you're watching on YouTube, or thank you for listening if you're tuning in via podcast. Stay informed and ahead in the world of AI with Lunchtime BABLing! #EUAIAct #ArtificialIntelligence #AILaw #TechGovernance #BABLAI #Podcast Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jan 15, 202425 min

Ep 28International Association of Algorithmic Auditors

Lunchtime BABLing listeners can use Coupon Code "BABLING" to save 20% off all BABL AI courses. Courses: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification Description: Welcome back to another episode of Lunchtime BABLing! In this episode, Shea Brown, CEO of BABL AI, joins forces with Jeffery Recker, our COO, to delve into an intriguing topic: the newly formed International Association of Algorithmic Auditors (IAAA). Throughout the episode, Shea and Jeffery unpack the crucial role of the IAAA in shaping the landscape of AI and algorithm auditing. They discuss the association's goals, its distinction from existing organizations, and its significance in ensuring that algorithms are audited for compliance, ethical standards, and the prevention of potential harm to individuals and society. The discussion also highlights the challenges and complexities involved in algorithmic auditing, the importance of professional conduct in the field, and the emerging regulations like the EU AI Act. Moreover, they explore the different types of algorithmic audits and the vital role of transparency in the auditing process. As one of the key founding members of the IAAA, Shea provides insights into the formation of this organization, its mission, and the importance of fostering a professional community among AI and algorithm auditors. Whether you're a professional in the field, someone interested in the ethical aspects of AI, or simply curious about the future of technology governance, this episode offers valuable perspectives and critical discussions on the evolving world of algorithmic auditing. IAAA website: https://iaaa-algorithmicauditors.org 🎙️ Listen to the full episode to understand the significance of algorithmic audits, the role of IAAA in shaping the industry, and the future of AI governance. Don't forget to like and subscribe for more insightful discussions on Lunchtime BABLing!
#AI #AlgorithmicAuditing #IAAA #TechnologyEthics #LunchtimeBABLing Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Jan 8, 202431 min

Ep 27Understanding Fundamental Rights Impact Assessments in the EU AI Act

Understanding the EU AI Act: Fundamental Rights Impact Assessments Explained Description: Join us in this eye-opening episode of the Lunchtime BABLing Podcast where Shea Brown, our host and CEO of BABL AI, teams up with Jeffery Recker, our COO, to delve deep into the recent developments in AI regulation, particularly focusing on the EU AI Act. This episode, "Understanding Fundamental Rights Impact Assessments in the EU AI Act," is a must-listen for anyone interested in the intersection of AI, regulation, and human rights. Key Discussion Points: Introduction to the EU AI Act: Gain insights into the EU AI Act's passing and its significance in shaping the future of AI regulation. Role of Fundamental Rights Impact Assessments: Understand what these assessments are, their importance, and how they differ from traditional impact assessments. Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems. Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes. Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation. Episode Highlights: Expert Insights: Jeffery Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses. Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape. Interactive Discussions: Engaging conversation between Shea and Jeffery, providing a nuanced understanding of the subject. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Dec 18, 202338 min

Ep 26National Conference on AI Law, Ethics, and Compliance

🔹 New Episode: National Conference on AI Law, Ethics, and Compliance In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington D.C., focusing on AI law, ethics, and compliance. He shares valuable insights from the workshop and interactions with legal experts in the field of AI governance. Key Discussions: -Understanding AI and the risks involved. -Governance frameworks for AI deployment. -The implications of the recent U.S. Executive Order on AI. -Global initiatives for AI safety and governance. Industry Spotlight: -The surge of generative AI in corporate strategy. -The evolving landscape of AI policy, privacy concerns, and intellectual property. Engage with Us: Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: Coupon Code: "BABLING" Link to the full AI and Algorithm Auditing Certificate Program is here: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Nov 6, 202310 min

Ep 25AI and Algorithm Auditing Certificate

Lunchtime BABLing is back with a new season! In this episode, Shea briefly talks about what to expect in the upcoming weeks for Lunchtime BABLing, and dives into some detail about our AI and Algorithm Auditing Certification Program. Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: Coupon Code: "BABLING" Link to the full AI and Algorithm Auditing Certificate Program is here: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification For more information about BABL AI and our services, as well as the latest news in AI Auditing and AI Governance, check out our website: https://babl.ai/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Oct 24, 20238 min

Ep 24Interview with Khoa Lam on AI Auditing

On this week's Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam. They discuss a wide range of topics including: 1: How Khoa got into the field of Responsible AI 2: His work at the AI Incident Database 3: His thoughts on generative AI and large language models 4: The technical aspects of AI and Algorithmic Auditing Khoa Lam LinkedIn: https://www.linkedin.com/in/khoalklam/ AI Incident Database: https://incidentdatabase.ai BABL AI Courses: https://courses.babl.ai/ Website: https://babl.ai/ LinkedIn: https://www.linkedin.com/company/babl-ai/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

May 8, 202355 min