In AI We Trust?

125 episodes

Ep 24 · Oren Etzioni: Why is the term "machine learning" a misnomer?

Dr. Oren Etzioni is Chief Executive Officer at AI2, the Allen Institute for AI, a non-profit that offers foundational research, applied research and user-facing products. He is Professor Emeritus at University of Washington and a Venture Partner at the Madrona Venture Group. He has won numerous awards and founded several companies, has written over 100 technical papers, and provides commentary on AI for The New York Times, Wired, and Nature. In this episode, Oren explains why “machine learning” is a misnomer and some of the exciting AI innovations he is supporting that will result in greater inclusivity. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Aug 10, 2021 · 37 min

Ep 23 · Alexandra Givens: What makes tech the social justice issue of our time?

Alexandra Reeve Givens is the President & CEO of the Center for Democracy and Technology (CDT). She is an advocate for using technology to increase equality, amplify voices, and promote human rights. Previously, Alexandra served as the founding Executive Director of the Institute for Technology Law & Policy at Georgetown Law, served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee and began her career as a litigator at Cravath, Swaine & Moore. In this episode, Alexandra explains her unconventional path to the tech space as a lawyer and why she believes technology is the social justice issue of our time. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Aug 5, 2021 · 26 min

Ep 22 · Navrina Singh: Why is AI a multi-stakeholder problem, and how do we solve for it? (Spoiler: it's all about trust.)

Navrina Singh is the Founder & CEO of Credo AI, whose mission is to empower organizations to deliver trustworthy and responsible AI through AI audit and governance products. Navrina serves on the Board of Directors of Mozilla and Stella Labs. Previously she served as the Product leader focused on AI at Microsoft where she was responsible for building and commercializing Enterprise Virtual Agents and spent 12+ years at Qualcomm. In this episode, Navrina shares several insights into responsible AI, including the 3 key elements to building trust in AI and the 4 components of the "Ethical AI flywheel." ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jul 28, 2021 · 37 min

Ep 21 · Andrew Burt: How can lawyers be partners in the AI space?

Andrew Burt is a lawyer specializing in artificial intelligence, information security and data privacy. He co-founded bnh.ai and serves as chief legal officer of Immuta. His work has been profiled by magazines like Fast Company and his writing has appeared in Harvard Business Review, the New York Times and the Financial Times. In this episode, we explore the 'hype cycle' of AI where risks are overlooked and the appropriate role of a lawyer as a partner in this space. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jul 22, 2021 · 38 min

Ep 20 · Anima Anandkumar: How can the intersection of academia and industry inform the next generation of AI?

Anima Anandkumar is an accomplished AI researcher in both academia and industry. She is the Bren Professor in the CMS department at Caltech and director of machine learning research at NVIDIA. Previously, Anima was a principal scientist at Amazon Web Services, where she enabled machine learning on the cloud infrastructure. Anima is the recipient of numerous awards and honors and has been featured in documentaries and articles by PBS, Wired, MIT Technology Review, Forbes and many others. In this episode we learn about the “trinity of the deep learning revolution,” how the next generation of AI will bring the “mind & body” together, and the detrimental impacts fostered by a lack of diversity in tech. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jun 30, 2021 · 39 min

Ep 19 · Vivienne Ming: How can we create AI that lifts society up rather than tearing it down?

Vivienne Ming is an internationally recognized neuroscientist and AI expert who has pushed the boundaries of AI in diverse areas including education, human resources, disability, and physical and mental health. In this episode, we ask Vivienne how we can ensure that society captures the benefits of AI technologies while mitigating their risks and avoiding harms to vulnerable populations. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jun 24, 2021 · 58 min

Ep 18 · Heather Cox: Why did our company make a public commitment to equitable AI?

On this episode, we hear from Heather Cox, Chief Digital Health and Analytics Officer at Humana. Heather brings 25 years of experience to the role, including having served as Chief Technology and Digital Officer at USAA and CEO of Citi FinTech at Citigroup. In this episode, Heather shares why she decided Humana should take the EqualAI Pledge to Reduce Bias in AI, and how they have restructured their company and partnerships to ensure their AI programs better serve their population and adhere to the core principle to "do no harm". ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jun 15, 2021 · 40 min

Ep 17 · Commissioner Edward Santow: How can tech governance preserve human rights and achieve responsible AI?

On this episode, we are thrilled to share our conversation with Commissioner Edward Santow of the Australian Human Rights Commission. The Commission recently released its Human Rights and Technology final report, which makes 38 recommendations to ensure human rights are upheld in Australia’s laws, policies, funding and education on AI. We ask him about lessons learned over the three-year creation of this report and which recommendations are most universally applicable. Learn more about the report here: https://tech.humanrights.gov.au/ ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jun 8, 2021 · 40 min

Ep 16 · Tess Posner: How can we create a more inclusive technology industry?

Tess Posner is the CEO of AI4ALL, an organization working to make the technology industry more inclusive and to ensure that AI is developed responsibly. Before joining AI4ALL, she was Managing Director of TechHire at Opportunity@Work, a national initiative launched out of the White House to increase diversity in the tech economy. In this episode, we explore the diversity challenges facing the technology industry and the exciting efforts that AI4ALL is leading to empower diverse young people to join – and improve – one of the most powerful industries shaping society today. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Jun 1, 2021 · 32 min

Ep 15 · Sarah Drinkwater: What is 'Responsible AI' and why don't we have it?

Sarah Drinkwater is director of the Responsible Technology team at the Omidyar Network, where she works to help technologists prevent, mitigate, and correct the societal downsides of technology—and maximize positive impact. Prior to Omidyar Network, Sarah was head of Campus London, Google’s first physical startup hub. At Google, Sarah also built and led a global Google Maps community team. She also advised startups and large brands on their social strategy and was a journalist. On this episode we ask, "What is 'Responsible AI' and why don't we have it?" ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

May 25, 2021 · 30 min

Ep 14 · Tim O'Brien: Is there a role for me in building ethical AI?

In this episode we speak with Tim O'Brien who leads Ethical AI Advocacy at Microsoft. Before joining Microsoft in 2003, Tim worked as an engineer, a marketer and a consultant at startups and Fortune 500 companies. In this discussion, Tim leads us through Microsoft's journey – and his own – to become a leader in the field of AI ethics and answers the questions: what does an AI Ethicist do? And, is there a role for 'white guys' to play in this field? ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

May 18, 2021 · 39 min

Ep 13 · Aneesh Chopra: How would you grade the US government's tech readiness?

Aneesh Chopra served as the first Chief Technology Officer of the United States. He is currently the president of CareJourney, a provider of clinically-relevant analytics that builds a rating system of healthcare networks. He is also the co-founder of a data analytics investment group, Hunch Analytics. Aneesh sits on the Board of the Health Care Cost Institute, a non-profit focused on unbiased health care utilization and cost information. Previously, Aneesh served as Virginia’s Secretary of Technology and wrote of his experience in government and tech in his book "Innovative State: How New Technologies Can Transform Government." In this episode, Aneesh rates our current state of tech in the US and shares other insights rooted in his experience in state and federal government. (Spoiler: he is no grade inflationist!) ---- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

May 11, 2021 · 32 min

Ep 12 · Ashley Casovan: There are tools to help governments and companies reduce bias in AI

In this episode, we interview our friend and colleague, Ashley Casovan, Executive Director of Responsible AI Institute, formerly AI Global, a non-profit dedicated to creating practical tools to ensure the responsible use of AI. Previously, Ashley served as Director of Data and Digital at the Government of Canada, where she led research and policy development related to data, open source, and artificial intelligence. Ashley helps us answer the pressing question in AI: should we rely on internal corporate monitoring, government regulation, third party certification, or some combination? Learn more about the Responsible AI Institute: https://www.responsible.ai/ More information on their upcoming work: https://www.responsible.ai/Whitepaper.pdf ---- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

May 4, 2021 · 37 min

Ep 11 · Kush Varshney: Can we trust AI?

Dr. Kush R. Varshney is a distinguished research staff member and manager in IBM Research AI at the Thomas J. Watson Research Center, where he has conducted cutting-edge AI and machine learning research for the past ten years. Varshney also serves as co-director of IBM’s Science for Social Good program. Varshney received both a Master of Science and a Ph.D. in Electrical Engineering and Computer Science from MIT. In addition to writing numerous articles on AI, Varshney helped develop AI Fairness 360, a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models. Varshney recently released his book Trust in Machine Learning. ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Apr 27, 2021 · 37 min

Ep 10 · Robert LoCascio: Why I co-founded EqualAI

Rob LoCascio is the founder of LivePerson, Inc. and has been its chief executive officer since its inception in 1995, making him one of the longest-standing founding CEOs of a tech company today. As the inventor of online chat for brands, Rob disrupted the way people communicate with companies around the world. He is a founding member of EqualAI, which works with companies, policy makers, and experts to reduce bias in AI, and the NYC Entrepreneurs Council of the Partnership for New York City. In 2001, Rob started the Dream Big Foundation with its first program, FeedingNYC. As someone who has been on the entrepreneurial journey for over two decades, Rob's mission is to inspire and help others who are on that same path. ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Apr 20, 2021 · 32 min

Ep 9 · Malcolm Frank: How to advise your clients to successfully and responsibly navigate the digital age

Malcolm Frank is the president of digital business and technology at Cognizant. Malcolm’s influence is wide ranging and evident across media. He has co-authored two best-selling books, “What to Do When Machines Do Everything” (2017) and “Code Halos” (2014) and authored numerous white papers focusing on the Future of Work. A highly sought-after speaker, Malcolm has presented at conclaves across the globe, including the World Economic Forum and the South by Southwest (SXSW) Conference. He is frequently quoted, is the subject of a Harvard Business School case study and was named “one of the most influential people in finance” by Risk Management magazine. On this episode, we ask Malcolm how companies should navigate the digital age responsibly, with an eye toward increasing use of AI while reducing liability and harms. ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Apr 13, 2021 · 37 min

Ep 8 · Christo Wilson: What is an algorithmic audit?

Christo Wilson is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University, a member of the Cybersecurity and Privacy Institute, and the Director of the college's BS in Cybersecurity program. He is a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University and an affiliate member of the Center for Law, Innovation and Creativity at Northeastern University School of Law. His research investigates the sociotechnical systems that shape our lives using a multi-disciplinary approach. You can find more of his talks and cutting-edge research here: https://cbw.sh/ ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Apr 7, 2021 · 28 min

Ep 7 · Judy Spitz: Do we have a pipeline problem?

Tune in to this week’s episode of "In AI We Trust?" to hear from Dr. Judith Spitz, Founder and Executive Director of Break Through Tech, and learn about the often-missed barrier she identified to getting women into tech and how her organization addresses it (hint: look in our own backyards). Dr. Spitz was previously Chief Information Officer (CIO) of Verizon, and in 2016 devoted herself to helping women break into tech. She launched WiTNY, the Women in Technology and Entrepreneurship in New York initiative, and saw a 94 percent increase in the number of women graduating with computer science degrees from its academic partner, CUNY (City University of New York). In 2020, she founded Break Through Tech, which is building on WiTNY’s success in cities around the country, starting with Chicago. ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 30, 2021 · 43 min

Ep 6 · Kathy Baxter: What to do before launching an ethical AI product

Kathy Baxter is the principal architect of Ethical AI Practice at Salesforce. She develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. Prior to Salesforce, she worked in user experience research at Google, eBay, and Oracle. On this episode, we ask Kathy: What are the critical steps to take, from an ethics perspective, to ensure your AI product is safe to launch? ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 26, 2021 · 28 min

Ep 5 · BONUS with Rep. Yvette Clarke and Roger McNamee: "In AI We Trust?" Podcast Launch Hosted by the Georgetown University Law Center

bonus

We're excited to share this bonus episode, a recording from the podcast's launch event with special guests Roger McNamee and Congresswoman Yvette Clarke. Representing New York's 9th District, Congresswoman Clarke is a committed champion of fighting bias in AI and other forms of discrimination in tech. Roger McNamee is a longtime investor in tech and author of "Zucked," which sheds light on the dangers of tech that is unfettered and insufficiently regulated. A special thanks to our friends at the Georgetown University Law Center for hosting this event. ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 24, 2021 · 1h 12m

Ep 4 · Meredith Broussard: What makes unfettered AI so dangerous (and what can we do about it)?

Meredith Broussard is a computer scientist and data journalism professor at NYU. Her book, "Artificial Unintelligence: How Computers Misunderstand the World," explains the origins of AI and the subtle and not-so-subtle ways that women and people of color were excluded from its genesis. On this episode, we ask Meredith: What makes unfettered AI so dangerous, and what can we do about it? ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 16, 2021 · 39 min

Ep 3 · Bob Work: America is not prepared to compete in the AI era

Bob Work served as the Deputy Secretary of Defense from 2014 to 2017 and has a long history of service in government and the military before then. He is widely known for developing the Third Offset strategy. He is currently President of TeamWork, a consulting firm that specializes in national security affairs. And even more relevant to this discussion: he is the Vice-Chair of the National Security Commission on Artificial Intelligence (NSCAI). You can read the NSCAI report here: https://www.nscai.gov/2021-final-report/ ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 11, 2021 · 33 min

Ep 2 · Cathy O'Neil: Why should companies care about ethical AI?

In this episode, Miriam and Mark are joined by Dr. Cathy O'Neil, a mathematician, data scientist, and author. She is a matriarch of the exploding and significant field of algorithmic bias. Cathy has a Ph.D. in mathematics from Harvard and has taught at MIT and Barnard. She also founded and runs the algorithmic auditing company Orcaa. You should all check out her popular blog, mathbabe.org, and follow her on Twitter @mathbabedotorg --- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 9, 2021 · 37 min

Ep 1 · Kurt Campbell: China, International Diplomacy and AI

A farewell episode with EqualAI Advisor Kurt Campbell, asking: How can we apply effective diplomacy strategies to international governance of AI? Note: Mark officially joins the podcast next episode! --- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 9, 2021 · 43 min

Welcome to In AI We Trust?

trailer

The EqualAI podcast launches March 10th. Subscribe wherever you get your podcasts so you don't miss out! To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal

Mar 2, 2021 · 2 min