In AI We Trust?

125 episodes — Page 2 of 3

Ep 75: Sarah Hammer (Wharton School) and Dr. Philipp Hacker (European University Viadrina): Can AI accelerate the UN Sustainable Development Goals (SDGs)?

Professor Sarah Hammer, Executive Director at the Wharton School of the University of Pennsylvania and leader of the Wharton Cypher Accelerator, and Dr. Philipp Hacker, Chair for Law and Ethics of the Digital Society at the European New School of Digital Studies, European University Viadrina, join In AI We Trust? this week to debrief their recent #AIforGood Conference. Listen to the discussion for insights on how financial regulation, sustainability in AI, content moderation, and other opportunities for international collaboration around AI can help advance the UN SDGs.
—Resources Mentioned This Episode:
AI for Good Global Summit
AI for Good Global Summit 2023: Input Statement by Professor Philipp Hacker
Regulating ChatGPT and other Large Generative AI Models
The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future
Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU Law
Sustainable AI Regulation
Legal and technical challenges of large generative AI models

Jul 26, 2023 · 59 min

Ep 74: Chair Charlotte Burrows (EEOC): Is your AI system violating civil rights laws?

In this week’s episode, we are joined by Chair of the U.S. Equal Employment Opportunity Commission (EEOC) Charlotte Burrows, who highlights the EEOC’s work to address AI proliferation in the employment sphere. She discusses the need to educate the public on how AI is being used, EEOC guidance on key civil rights laws such as the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act of 1964 (Title VII), as well as key points employers should be aware of when deploying AI.

Jul 12, 2023 · 45 min

Ep 73: Kevin McKee (DeepMind): How does AI influence the core of being human?

Tune in to this week’s episode of In AI We Trust?, where Kevin McKee, Senior Research Scientist at Google DeepMind, discusses issues of AI fairness, AI’s impact on the LGBT+ community, and the balance between developing AI that humans can trust and the anthropomorphization of technology. Kevin leads research projects focused on machine learning, social psychology, and sociotechnical systems and has worked on algorithmic development and evaluation, environment design, and data analysis.
—Resources Mentioned this Episode:
Humans may be more likely to believe disinformation generated by AI
Countries Must Act Now Over Ethics of Artificial Intelligence
Online hate and harassment continues to rise

Jul 5, 2023 · 35 min

Ep 72: Chris Wood (LGBT Tech): How can we ensure our LGBT+ voices are heard through our data?

This week on In AI We Trust?, Executive Director of LGBT Tech Chris Wood joins Miriam Vogel and guest co-host Kathy Baxter for a special episode in celebration of Pride Month. Join this week’s conversation on the duality of technology for the LGBT+ community – how it can be an impactful medium to foster connection or a harmful tool leveraged against the same individuals – as well as the significance of diversity in tech, the complexity of representation in our datasets, and his important research and other initiatives, which range from broadband access in rural communities to building an AI of their own.
—Resources Mentioned this Episode:
LGBT Tech Website
Vision For Inclusion: An LGBT Broadband Future
LGBT Tech Programs

Jun 28, 2023 · 45 min

Ep 71: Gilman Louie (America’s Frontier Fund, CEO of In-Q-Tel, NSCAI Comm’r): How will we respond to this ‘Sputnik’ moment?

Gilman Louie is CEO and co-founder of America’s Frontier Fund, CEO of In-Q-Tel, and an NSCAI Commissioner. Tune into this week’s episode of In AI We Trust?, where Gilman shares his thoughts on the government's role in regulating, funding, and convening key stakeholders to promote responsible AI. Gilman invokes similar moments of technological innovation in our history to contextualize the opportunity the U.S. has at this moment to set the standards in the AI race, and considers the challenges that derive from our “click economy”. Hear these thoughts and more in this great episode.

May 24, 2023 · 53 min

Ep 70: Rep. Chrissy Houlahan (D-PA): How do we prepare Congress for the age of AI?

Meet one of the Bad A#%* women in Congress, Representative Chrissy Houlahan (D-PA). She is a trailblazer: a strong advocate for and accomplished practitioner in STEAM (science, technology, engineering, art and math) as an engineer, Air Force veteran, successful entrepreneur and former chemistry teacher. This week on In AI We Trust?, Miriam Vogel and special guest co-host Victoria Espinel of #BSA ask Representative Houlahan to share her unique perspective on why – and how – Congress must do more to support our veterans, women, and entrepreneurship, and how this relates to her work in Congress on AI policy.

May 3, 2023 · 30 min

Ep 69: Dr. Haniyeh Mahmoudian (DataRobot): Who should be involved in AI ethics?

In this episode of In AI We Trust?, Dr. Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, provides insight into the timely and critical role of an AI ethicist. Haniyeh explains how culture is a key element of responsible AI development. She also reflects on the questions to ask in advance of designing an AI model and the importance of engaging multiple stakeholders to design AI effectively. Tune in to this episode to learn these and other insights from an industry thought leader.
—Resources mentioned in this episode:
How to Tackle AI Bias (Haniyeh Mahmoudian, PhD)

Apr 26, 2023 · 41 min

Ep 68: Justin Hotard (Hewlett Packard Enterprise): Are local communities and data the key to unlocking better AI?

Justin Hotard leads the High Performance Computing (HPC) & AI business group at Hewlett Packard Enterprise (HPE). Tune in to In AI We Trust? this week as he discusses supercomputing, HPE’s commitment to open source models for global standardization, and using responsible data to ensure responsible AI.
—Resources mentioned in this episode:
What are supercomputers and why are they important? An expert explains (Justin Hotard & the World Economic Forum)
Fueling AI for good with supercomputing (Justin Hotard & HPE)
Hewlett Packard Enterprise ushers in next era in AI innovation with Swarm Learning solution built for the edge and distributed sites (HPE)

Apr 5, 2023 · 40 min

Ep 67: Jordan Crenshaw (U.S. Chamber of Commerce): Can your company survive without AI adoption?

Based on the testimony of 87 witnesses from 5 field hearings across the US, the U.S. Chamber of Commerce bipartisan AI Commission on Competition, Inclusion, and Innovation released a report yesterday addressing the state of AI. Tune in this week to hear the U.S. Chamber’s Technology Engagement Center (C_TEC) VP, Jordan Crenshaw, share key takeaways from this and other recent C_TEC reports, why tech issues are business issues, the importance of digitizing government data, and the critical impact of tech on small businesses.
—Materials mentioned in this episode:
The U.S. Chamber’s AI Commission report (U.S. Chamber of Commerce)
Investing in Trustworthy AI (U.S. Chamber of Commerce & Deloitte)
U.S. Chamber Artificial Intelligence Principles (U.S. Chamber of Commerce)
Impact of Technology on U.S. Small Businesses (U.S. Chamber of Commerce Technology Engagement Center)

Mar 10, 2023 · 38 min

Ep 66: Elham Tabassi and Reva Schwartz (NIST): What's the big deal about the NIST AI Risk Management Framework (AI RMF)?

Elham Tabassi and Reva Schwartz – two AI leaders from the National Institute of Standards and Technology (NIST) – join us this week to discuss the AI Risk Management Framework (#AIRMF), released on January 26th thanks to the herculean efforts of our guests. Tune in to find out why Miriam Vogel and Kay Firth-Butterfield believe the AI RMF will be game-changing. Learn the purpose behind the AI RMF; the emblematic 18-month multi-stakeholder, transparent process used to design it; how they made it ‘evergreen’ at a time when AI progress is moving at lightning speed; and much more.
—Materials mentioned in this episode:
AI Risk Management Framework (NIST)
NIST AI Risk Management Framework Playbook (NIST)
Perspectives about the NIST Artificial Intelligence Risk Management Framework (NIST)

Feb 6, 2023 · 50 min

Ep 65: Davos in Review: Should we hit 'pause' on generative AI?

The annual World Economic Forum (WEF) meeting at Davos gathers leading thinkers in government, business and civil society to discuss current global economic and social challenges. This week, listen to WEF Executive Committee Member and our own co-host Kay Firth-Butterfield and Miriam Vogel discuss why this was Kay's “best Davos yet”. Not surprisingly, generative AI and ChatGPT were among the hottest topics. Learn insights gleaned on generative AI’s power and limitations, the key role that investors play in the development and deployment of responsible AI, and how AI can predict wildfires and help fight the climate crisis. Leave a 5 star rating!
—Davos discussions and materials mentioned in this episode:
A Conversation with Satya Nadella, CEO of Microsoft
Generative AI
Investing in AI, with Care
AI for Climate Adaptation
How AI Fights Wildfires
Satya Nadella Says AI Golden Age Is Here and ‘It’s Good for Humanity’
These were the biggest AI developments in 2022. Now we must decide how to use them (Kay Firth-Butterfield)

Feb 2, 2023 · 34 min

Ep 64: Dr. Stuart Russell (UC Berkeley): Are we living in an AGI world?

Dr. Stuart Russell (CS Professor, UC Berkeley) has kept us current on AI developments for decades and, in this week’s episode, prepares us for the headlines we’ll hear this week @Davos and in the coming year. He shares his thoughts and concerns on ChatGPT, Lethal Autonomous Weapons Systems, how the future of work might look through an AI lens, and a human compatible design for AI. Listen to this episode here and subscribe to ensure you catch other important upcoming discussions.
—Materials mentioned in this episode:
Davos 2023, the World Economic Forum
Radio Davos, A World Economic Forum Podcast

Jan 18, 2023 · 51 min

Ep 63: 2022 Year in Review: Are we ready for what’s coming in AI?

In this special year-in-review edition of "In AI We Trust?", co-hosts Kay Firth-Butterfield (@KayFButterfield) and Miriam Vogel (@VogelMiriam) take a look back at the key themes and insights from their conversations. From interviews with thought leaders, government officials and senior executives in the field, we explore progress and challenges from the past year in the quest for trustworthy AI. We also look ahead to what you can expect to see and encounter, including key issues that are likely to emerge in AI in 2023. Join us as we reflect and gear up for an exciting year in the accelerated path toward game-changing and responsible AI.
—Materials mentioned in this episode:
Davos 2023, the World Economic Forum
A 72-year-old congressman goes back to school, pursuing a degree in AI (The Washington Post)
Board Responsibility for Artificial Intelligence Oversight, Miriam Vogel and Robert G. Eccles (Harvard Law School Forum on Corporate Governance)
5 ways to avoid artificial intelligence bias with 'responsible AI', Miriam Vogel and Kay Firth-Butterfield

Jan 11, 2023 · 33 min

Ep 62: Dr. Suresh Venkatasubramanian (White House OSTP/Brown University): Can AI be as safe as our seatbelts?

In this episode, we are joined by Dr. Suresh Venkatasubramanian, a former official at the White House Office of Science and Technology Policy (OSTP) and CS professor at Brown, to discuss his work in the White House developing policy, including the Blueprint for an AI Bill of Rights. Suresh also posits that current AI challenges stem from a failure of imagination, and discusses the need to engage diverse voices in AI development and the evolution of safety regulations for new technologies.
—Materials mentioned in this episode:
Blueprint for an AI Bill of Rights (The White House)

Dec 19, 2022 · 46 min

Ep 61: Joaquin Quiñonero Candela (LinkedIn): Can we meet business goals AND attain responsible AI? (spoiler: we can and must)

This week, Joaquin Quiñonero Candela (LinkedIn, formerly at Facebook and Microsoft) joins us to discuss AI storytelling; ethics by design; the imperative of diversity to create effective AI; and strategies he uses to make responsible AI a priority for the engineers he manages, policy-makers he advises, and other important stakeholders.
—Materials mentioned in this episode:
Technology Primer: Social Media Recommendation Algorithms (Harvard Belfer Center)
Finding Solutions: Choice, Control, and Content Policies; a conversation between Karen Hao and Joaquin Quiñonero Candela hosted live by the Harvard Belfer Center

Dec 7, 2022 · 43 min

Ep 60: Deputy Secretary Graves (DOC) answers the question: Can We Maintain Our AI Lead? (spoiler alert: We are AI Ready!)

The Department of Commerce plays a key role in the USG’s leadership in AI given the multiple ways AI is used, patented and governed by the Department. In this special episode, hear from Commerce Deputy Secretary Don Graves on how the US intends to maintain leadership in AI, including through its creation of standards to attain trustworthy AI, working with our allies, and ensuring an inclusive and AI-ready workforce.
—Materials mentioned in this episode:
Proposed Law Enforcement Principles on the Responsible Use of Facial Recognition Technology Released from the World Economic Forum
Artificial Intelligence: Detecting Marine Animals with Satellites (NOAA Fisheries)

Nov 16, 2022 · 38 min

Ep 59: Carl Hahn (NOC): When your AI reaches from the cosmos to the seafloor, and the universe in between, how can you ensure it is safe and trustworthy?

Carl Hahn, Vice President and Chief Compliance Officer at Northrop Grumman, one of the world’s largest military technology providers, joins us on this episode to help answer this question, one he addresses daily. Carl shares his perspective on the impact of the DoD principles, how governments and companies need to align on the “how” of developing and using AI responsibly, and much more.
—Materials mentioned in this episode:
NAIAC Field Hearing @ NIST YouTube Page
“DOD Adopts 5 Principles of Artificial Intelligence Ethics” (Department of Defense)
“Defense AI Technology: Worlds Apart From Commercial AI” (Northrop Grumman)
Smart Toys (World Economic Forum): Smart Toy Awards

Nov 2, 2022 · 44 min

Ep 58: Mark Brayan (Appen): For whom is your data performing?

In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing Director of Appen, which provides training data for thousands of machine learning and AI initiatives. Good quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure that it is sourced inclusively, responsibly, and ethically. When developing and using responsible AI, it’s critically important to get your data right by asking the right questions: for whom is your data performing – and for whom could it fail?
— Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.

Oct 12, 2022 · 28 min

Ep 57: Krishnaram Kenthapadi (Fiddler.ai): Citizen audits are coming; are you ready?

Krishnaram is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and machine learning monitoring platform. Prior to Fiddler AI, Krishnaram served as Principal Scientist at Amazon AWS AI, on the LinkedIn AI team, and on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. In this episode, Krishnaram stresses the importance of not simply validating your model but continuing to test it post-deployment. He also highlights incentives to test your AI early and often: even without new laws in place, empowered and tech-savvy citizens are increasingly taking audits into their own hands.

Sep 28, 2022 · 44 min

Ep 56: Dr. Edson Prestes: Can we ingrain empathy into our AI?

Dr. Prestes is Professor of Computer Science at the Institute of Informatics, Federal University of Rio Grande do Sul, and leader of the Phi Robotics Research Group. In this episode, Dr. Prestes shares his trailblazing work in international AI policy and standards, including the development of the first global AI ethics instrument. He discusses ethics in technology and the infusion of empathy, as well as his focus on establishing human rights for a digital world.

Sep 14, 2022 · 45 min

Ep 55: Joe Bradley (LivePerson): How much 'rat poison' is in our AI and can AI be more "human"?

Joe Bradley is the Chief Scientist at LivePerson, a leading Conversational AI company creating digital experiences that are “Curiously Human”, powering nearly a billion conversational interactions monthly in their Conversational Cloud. In this episode, Joe shares the broad lens he brings to his work in AI. He discusses the interconnectedness between AI and humanity, and his work at LivePerson to develop “empathetic” AI systems to help brands better connect with their customers. Joe addresses his experience in the EqualAI Badge program and basic challenges in reducing bias in AI, from determining what to measure to whom to consider when evaluating our systems; and asks how much “rat poison” is tolerated in our cereal (AI systems).

Aug 24, 2022 · 52 min

Ep 54: Dr. Richard Benjamins (Telefonica): What are the key ingredients for a successful Responsible AI Framework?

Dr. Richard Benjamins is Chief AI & Data Strategist at Telefonica, author of The Myth of the Algorithm and A Data-Driven Company, and co-founder of OdiseIA. In this week’s episode, Richard offers his roadmap for trustworthy AI, including his company's “aspirational” approach to AI governance, their use of an ethics committee, how they use the bottom line to reinforce their goals, and other best practices in designing responsible AI use.

Jun 15, 2022 · 58 min

Ep 53: Beena Ammanath (Deloitte): What concrete steps companies can (must) take to achieve trustworthy AI

Beena Ammanath is Executive Director of the Global Deloitte AI Institute, author of Trustworthy AI: A Business Guide For Navigating Trust and Ethics in AI, and founder of Humans for AI, a nonprofit to increase diversity in tech. In this episode, Beena explains where organizations (and others) can begin to embed AI ethics as part of their routine business practice and the importance for policymakers and organizations alike of focusing on use cases when building frameworks, and shares other lessons on how to ensure we create more inclusive, trustworthy AI.

May 27, 2022 · 51 min

Ep 52: Dr. Margaret Mitchell: How can we ensure AI reflects our values – and why does this matter to each of us?

Dr. Margaret Mitchell is a renowned researcher who has won numerous awards for her work developing practical tools to combine ethics and machine learning. Last fall, Dr. Mitchell joined the AI startup Hugging Face ("to democratize good machine learning"); she previously held research positions at Google and Microsoft. In this episode, Dr. Mitchell articulates numerous challenges in the endeavor to create ethical AI. She also illuminates the distinction between ethical and responsible AI; the necessity of a human-centered, inclusive approach to AI development; and the need for policymakers to understand AI.

May 10, 2022 · 56 min

Ep 51: Rep. Don Beyer (D-VA): Can the U.S. Congress Create Legislative Frameworks to Support AI Development (and should it)?

Rep. Don Beyer (D-VA) is Chair of Congress' Joint Economic Committee and serves on the Ways and Means and the Science, Space and Technology Committees, as well as the AI Caucus – and in his spare time, he is pursuing a master's degree in artificial intelligence. In this episode, Rep. Beyer explains his enthusiasm for AI and the opportunities it presents to enhance human life (e.g., better understanding and treating long COVID, and preserving life through suicide prevention), the potential harms he is concerned about, and the ability of the U.S. Congress to appropriately address these challenges.

Apr 26, 2022 · 36 min

Ep 50: Mira Lane (Microsoft): Can compassion lead to better AI?

Mira Lane, a polymath, technologist and artist, is the head of Ethics & Society at Microsoft, a multidisciplinary group responsible for guiding AI innovation toward ethical, responsible, and sustainable outcomes. In this episode, she shares how the culture at Microsoft includes compassion in AI development to the benefit of their AI products, how she changes the perception of responsible AI from a tax to a value-add, and how games can play a role in achieving this goal.

Apr 14, 2022 · 47 min

Ep 49: Dr. Athina Kanioura (PepsiCo): Is AI a Privilege Reserved for Big Tech?

Dr. Athina Kanioura is Chief Strategy and Transformation Officer at PepsiCo, leading their company-wide transformation in digital strategy. In this episode, Athina opens our eyes to ways that companies like PepsiCo are using AI (and equally important, where they are not). She shares challenges in undergoing a digital transformation and explains their legacy-focused approach to AI integration as a means for greater efficiency as well as instilling better sustainability practices, upskilling employees and supporting small business partners.

Apr 8, 2022 · 47 min

Ep 48: Keith Sonderling, EEOC Commissioner: Does AI scale or reduce bias in the workplace?

Keith Sonderling is a Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC) and helped launch the EEOC's unprecedented Initiative on Artificial Intelligence and Algorithmic Fairness in 2021. In this episode, he shares guidance for employers on building, buying and employing AI programs in HR systems and shares his optimism on the unique opportunity we have at this moment to ensure a significant, positive impact in deploying AI technology.
Find the referenced WEF Toolkit on Artificial Intelligence for Human Resources here: https://www.weforum.org/reports/human-centred-ai-for-hr-state-of-play-and-the-path-ahead#report-nav

Mar 31, 2022 · 51 min

Ep 47: MP Darren Jones: 'Horizon Scanning' to Design Better AI Regulation

Darren Jones is a Member of UK Parliament who has chaired the Parliamentary Technology Information and Communications Forum, the Parliamentary Commission on Technology Ethics, and Labour Digital. Darren is also the founding chair of the Institute of AI, a global coalition of legislators interested in AI, and a member of the World Economic Forum (WEF) Global AI Action Alliance (GAIAA). In this episode, Darren speaks to how legislators need to 'horizon scan' and understand cutting-edge tech in order to create more opportunities while reducing risk through laws and regulation. He argues regulation can support 'safety by design', instead of the tendency to retrofit AI trust and safety considerations at the end of the process.

Mar 17, 2022 · 40 min

Ep 46: Ziad Obermeyer: A physician, academic, and McKinsey alum's approach to tackling bias in AI

Ziad Obermeyer is a Professor of Health Policy and Management at the UC Berkeley School of Public Health, where he conducts research at the intersection of machine learning, medicine, and health policy. Previously, he was a professor at Harvard Medical School and a consultant at McKinsey & Co. He continues to practice emergency medicine in underserved parts of the US and is also a co-founder of Nightingale Open Science, a computing platform giving researchers access to massive new health imaging datasets. In this episode, you'll hear how he ended up co-authoring the seminal study identifying bias in AI health systems, published in Science in 2019, and whether you should be using his Algorithmic Bias Playbook.
Links to referenced articles and playbook:
http://ziadobermeyer.com/research/
https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias

Mar 9, 2022 · 59 min

Ep 45: Jen Gennai (Google): How to Manage the Creation of Responsible AI Products for Billions

Jen Gennai is Founder and Director of the Responsible Innovation Group at Google, where she and her team are responsible for creating and operationalizing Google’s AI Principles. In this episode, Jen shares what responsible AI means to her, lessons learned that inform her perspective and from which we all can learn, how AI should or should not be regulated, and the AI innovations on the horizon she is excited to see come to fruition.

Mar 4, 2022 · 1h 8m

Ep 44: Ilana Golbin (PwC): Does sci-fi help or hinder AI understanding?

Ilana Golbin is a Director in PwC Labs leading projects on emerging technology and AI. She is a Certified Ethical Emerging Technologist and was recently recognized in Forbes as one of 15 leaders advancing ethical AI. In this episode, Ilana shares the principles she uses to ensure confidence in AI systems used both internally at PwC and when advising clients. She explains some of the complexities in the application of those principles, how responsible AI governance is part of a demonstration of cultural sensitivity, and how sci-fi can be a helpful partner in the governance process.

Feb 24, 2022 · 46 min

Ep 43: Renée Cummings: How AI Does (& Should) Impact Our BHM Celebration

Renée Cummings is a pioneering AI ethicist, criminologist, Columbia University Community Scholar, and founder of Urban AI. Her studies focus on the impact of AI on criminal justice, specifically in communities of color and incarcerated populations. In this episode, you will be inspired by Renée's insights on the impact that AI and data science have on our civil rights, how increasing diversity in AI is fundamental to creating technology that reflects our humanity, and improvements that still need to be made in areas such as trust and accountability.

Feb 17, 2022 · 52 min

Ep 42: Marco Casalaina (Salesforce): Techno-Optimist, not a Techno-Chauvinist

In this episode, Marco Casalaina, Salesforce’s SVP of Product Management and GM of Einstein, explains how his decades of experience in AI and tech have resulted in his techno-optimism, how an AI ethicist enhances his work, and why he encourages others to join the EqualAI badge program. He also shares his excitement about rapidly developing transformer models, but illuminates how this technology will be the next-gen ethical AI quandary.

Feb 11, 2022 · 42 min

Ep 41: Mukesh Dalal, Stanley Black & Decker (SBD): Why all companies will need a Chief AI Officer

As the Chief AI Officer at Stanley Black & Decker (SBD), Mukesh Dalal has helped transform a 178-year-old global manufacturing company’s approach to AI, with the vision of delivering $1 billion of value to the company through AI and analytics. In this week’s episode, Mukesh outlines SBD’s forward-thinking strategy on AI, describes SBD’s journey into the responsible AI space, and foresees that soon all major companies will have Chief AI Officers to harness the business potential and root out the risks of AI technology.

Feb 4, 2022 · 32 min

Ep 40: Meghna Sinha: Why data scientists are like medical professionals, why ignorance is not an option, and steps we all must take when making data-based decisions

Meghna Sinha is the Vice President of AI and Data at Verizon’s AI Center. Before joining Verizon, Meghna was Target’s VP of Data Sciences. In this episode, Meghna posits that data scientists are similar to medical practitioners, affirms that AI must start and end with humans, and shares lessons from the EqualAI Badge Program for Responsible AI Governance.
Referenced papers/articles:
WEF Toolkit: https://www3.weforum.org/docs/WEF_Empowering_AI_Leadership_2022.pdf
@VogelMiriam & Robert Eccles on AI as a necessary part of board governance: https://corpgov.law.harvard.edu/2022/01/05/board-responsibility-for-artificial-intelligence-oversight/
Meghna’s article offering tips to women in tech: https://www.verizon.com/about/news/top-tips-women-tech

Jan 28, 2022 · 39 min

Ep 39: In AI We Trust? welcomes its acclaimed new cohost, Kay Firth-Butterfield of the World Economic Forum

Kay Firth-Butterfield is a leader in AI governance. Her deep and wide-ranging experience as an entrepreneur, barrister, judge, and now as Head of Artificial Intelligence and Machine Learning and member of the Executive Committee at the World Economic Forum, has established Kay as an internationally recognized expert on the subject. Her numerous titles and awards include being featured in the New York Times as one of 10 Women Changing the Landscape of Leadership. In this episode, Miriam Vogel interviews her co-host, Kay Firth-Butterfield, on her long-time commitment to exploring how humanity can equitably benefit from new technologies.

Jan 18, 2022 · 38 min

Ep 38A year in review: the path toward responsible AI in 2021 (and a farewell to Mark)

In this episode, cohosts Miriam Vogel and Mark Caine share the conversations and highlights that inspired them in 2021 and predict what we can expect to see in this space in 2022. We also bid farewell to Mark as he departs the World Economic Forum and takes on new adventures.

Jan 7, 202224 min

Ep 36Amy Holcroft: How HPE is "living it in action" and how the EqualAI Badge program, in collaboration with WEF, has helped this effort

Amy Holcroft is the Chief Privacy Officer and VP of Privacy & Info Governance at Hewlett Packard Enterprise (HPE). In this episode, Amy shares how she co-led the establishment of HPE’s AI Ethics Advisory Board and HPE’s AI Ethical Principles. She explains how her work requires resilience and thoughtful governance, and how participation in the EqualAI Badge Program on Responsible AI Governance, in collaboration with the World Economic Forum, supports her work at HPE, including a timely session with Cathy O'Neil that she put to good use immediately. ----- Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal

Dec 9, 202130 min

Ep 35Seth Dobrin: How do you establish a human-centered approach to data and AI (and why is this necessary to succeed)?

Seth Dobrin is the Global Chief AI Officer of IBM. Seth has spent his career using existing technologies to address previously intractable problems at scale. In this episode, Seth shares concrete steps he has taken to create a more diverse and trust-based workplace, explains how his PhD in genetics is relevant and helpful to his current work in AI, and breaks down the what, why and how of a human-centered approach to AI. ----- Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal

Nov 11, 202147 min

Ep 34Kat Zhou: What is the role of design in creating inclusive and equitable AI?

Kat Zhou is a product designer focusing on integrating ethics into the design of AI systems. She is a leading voice for more inclusive and privacy-respecting approaches to AI, and she has called for greater regulation of AI and more human-centric business models for AI companies. In this episode, we ask Kat how governments, product designers, and corporate decision makers can minimize the harms of AI products – and whether there are any products that should never be developed to begin with. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Nov 3, 202134 min

Ep 33Alex Kotran: Who needs AI literacy and how can we accelerate it?

Alex Kotran is the co-founder and CEO of the AI Education Project, a non-profit that brings AI-related knowledge and skills to communities that are being impacted by AI and automation. In this episode, Alex highlights how the communities that are most impacted by AI are often the ones with the least access to basic AI knowledge, and how this is creating disparities in access to healthcare, financial services, criminal justice, and more. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Oct 28, 202128 min

Ep 32Meg King of the Wilson Center: Who is ensuring policy makers are able to "speak AI"?

Meg King is the Director of the Science and Technology Innovation Program at the Wilson Center, a non-partisan think tank created by Congress. She leads innovative transnational projects examining the development of emerging technology and related policy opportunities. Her program also provides training seminars for Congressional and Executive branch staff to develop technology knowledge and skills. In this episode, Meg shares context on her recent congressional testimony, the goals of her work at the Wilson Center, and lessons we can learn from AI frameworks and policies abroad. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Oct 22, 202130 min

Ep 31David Hardoon: Can AI be ethical?

In this episode, we speak with David Hardoon, a self-proclaimed "data artist." He leads Data and AI efforts at UnionBank Philippines and serves as an external advisor to Singapore's Corrupt Practices Investigation Bureau (CPIB). David has extensive experience in both industry and academia, with a PhD in Computer Science and a B.Sc. in Computer Science and AI. He weighs in on the high-level concepts surrounding ethics and AI, and offers practical steps he uses to support ethical governance of the AI systems under his purview. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Oct 14, 202146 min

Ep 30Rep. Yvette Clarke: Why is AI regulation necessary during this time of racial reckoning?

Find out on this week's episode with special guest Congresswoman Yvette Clarke (NY-9th) why she makes AI a top priority in her work to protect vulnerable populations. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Oct 6, 202133 min

Ep 29Elham Tabassi of NIST: Who ensures the U.S. has strong metrics, tools, & standards for responsible AI?

Observers have been skeptical about the ability of the US to lead in AI and establish the necessary framework to ensure its safe and effective development. NIST – the National Institute of Standards and Technology – is responding to that call. In this episode, we speak with Elham Tabassi, who is leading NIST's work to support safe and effective Artificial Intelligence. Elham is the Chief of Staff in the Information Technology Laboratory (ITL) and serves on the National AI Research Resource Task Force, announced by the White House and the National Science Foundation (NSF) last June. Learn about NIST's 'secret sauce' for impactful work (spoiler: it involves you) and participate in the discussion through upcoming workshops and listening sessions: https://www.nist.gov/itl/ai-risk-management-framework/ai-rmf-development-request-information ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Sep 28, 202148 min

Ep 28Taka Ariga and Stephen Sanford: What is the U.S. GAO's AI Framework?

Taka Ariga is the first Chief Data Scientist and Director of the Innovation Lab at the U.S. Government Accountability Office (GAO). Stephen Sanford is the Managing Director of GAO’s Strategic Planning and External Liaison team. Taka and Stephen are the authors of the GAO's recently released AI Framework, one of the first resources provided by the U.S. government to help identify best practices and principles to deploy, monitor and evaluate AI responsibly. In this episode, we ask the AI Framework authors why they took on this initiative and what lessons they learned that are broadly applicable across industry. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Sep 23, 202143 min

Ep 27Vilas Dhar: How can civil society shape a positive, human-centric future for AI?

Vilas Dhar is a technologist, lawyer, and human rights advocate championing a new social compact for the digital age. As President and Trustee of the Patrick J. McGovern Foundation, he is a global leader in advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all. In this episode we ask Vilas how he arrived at the intersection of AI and philanthropy, and how he thinks philanthropists and civil society can shape a more inclusive and societally beneficial future for AI. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Sep 16, 202140 min

Ep 26Steve Mills: How can companies walk the walk on responsible AI?

Steve Mills is a Partner at Boston Consulting Group (BCG), where he serves as Chief AI Ethics Officer and the Global Lead for Artificial Intelligence in the Public Sector. He has worked with dozens of leading companies and government agencies to improve their AI practices, and in this episode he shares some of the key lessons he has learned about how organizations can translate their ethical AI commitments into practical, meaningful actions. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Aug 24, 202135 min

Ep 25Julia Stoyanovich: Can AI systems operate fairly within complex, diverse societies?

Julia Stoyanovich is an Assistant Professor in the Department of Computer Science and Engineering at NYU’s Tandon School of Engineering, where she is also the Director of the Center for Responsible AI. Her research focuses on responsible data management and analysis and on practical tools for operationalizing fairness, diversity, transparency, and data protection in all stages of data acquisition and processing. In addition to conducting field-leading research and teaching, Professor Stoyanovich has written several comics aimed at communicating complex AI issues to diverse audiences. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Aug 18, 202142 min