
EP 148: Safer AI - Why we all need ethical AI tools we can trust
Everyday AI Podcast – An AI and ChatGPT Podcast · Everyday AI
About this episode
Do you trust the AI tools that you use? Are they ethical and safe? We often overlook the safety behind AI, and it's something we should pay attention to. Mark Surman, President of the Mozilla Foundation, joins us to discuss how we can trust and use ethical AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Mark Surman and Jordan questions about AI safety
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn

Timestamps:
[00:01:05] Daily AI news
[00:03:15] About Mark and the Mozilla Foundation
[00:06:20] Big Tech and ethical AI
[00:09:20] Is AI unsafe?
[00:11:05] Responsible AI regulation
[00:16:33] Creating balanced government regulation
[00:20:25] Is AI too accessible?
[00:23:00] Resources for AI best practices
[00:25:30] AI concerns to be aware of
[00:30:00] Mark's final takeaway

Topics Covered in This Episode:
1. Future of AI regulation
2. Balancing interests of humanity and government
3. How to make and use AI responsibly
4. Concerns with AI

Keywords:
AI space, risks, guardrails, AI development, misinformation, national elections, deepfake voices, fake content, sophisticated AI tools, generative AI systems, regulatory challenges, government accountability, expertise, company incentives, Meta's responsible AI team, ethical considerations, faster development, friction, balance, innovation, governments, regulations, public interest, technology, government involvement, society, progress, politically motivated, Jordan Wilson, Mozilla, show notes, Mark Surman, societal concerns, individual concerns, authenticity, shared content, data, generative AI, control, interests, transparency, open source AI, regulation, accuracy, trustworthiness, hallucinations, discrimination, reports, software, OpenAI, CEO, rumors, high-ranking employees, Microsoft, discussions, Facebook, responsible AI team, Germany, France, Italy, agreement, future AI regulation, humanity, safety, profit-making interests

Start Here ▶️
Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and all episodes: StartHereSeries.com. Also, here's a link to the entire series on a Spotify playlist.