PLAY PODCASTS
How to Practice Responsible AI
Episode 35

From predictive policing to automated credit scoring, algorithms applied at massive scale and left unchecked pose a serious threat to our society. Dr. Rumman Chowdhury, director of Machine Learning Ethics, Transparency and Accountability at Twitter, joins Azeem Azhar to explore how businesses can practice responsible AI to minimize unintended bias and the risk of harm.

Azeem Azhar's Exponential View

June 16, 2021 · 49m 15s


Show Notes

Chowdhury and Azhar also discuss:

  • How you can assess and diagnose bias in unexplainable “black box” algorithms.
  • Why responsible AI demands top-down organizational change, including new metrics and systems of redress.
  • How Twitter led an audit of its own image-cropping algorithm, which was alleged to favor white faces over those of people of color.
  • The emerging field of “Responsible Machine Learning Operations” (MLOps).

@ruchowdh
@azeem
@exponentialview

Further resources:

