
The AI & Tech Society by Danar
AI, Technology, and Leadership: Exploring the Future of Society
Danar Mustafa
Show overview
The AI & Tech Society by Danar has been publishing since 2024 and in the two years since has built a catalogue of 110 episodes, roughly 35 hours of audio in total. Releases follow a weekly cadence, and the show is now in its fourth season.
Episodes typically run ten to twenty minutes, with most landing between 8 and 25 minutes, though run-times range widely across the catalogue. None of the episodes are flagged explicit by the publisher. It is catalogued as an English-language Business show.
The show is actively publishing: the most recent episode landed yesterday, and 26 episodes have been released so far this year. Published by Danar Mustafa.
From the publisher
AI, Technology & Leadership – Shaping the Future of Society

Step into the future with a podcast that explores the shift from the industrial age to the digital era. We uncover how AI, robotics, data, and emerging technologies are transforming business strategy, leadership, and the role of humanity in a world driven by innovation.

In every episode, you’ll discover:
How artificial intelligence, robotics, and digitalization are redefining industries.
The power of data-driven strategies in business, government, and public policy.
The evolving role of leaders in navigating digital transformation.

Who should listen: CEOs, CTOs, CIOs, AI product managers, startup founders, tech leaders, policymakers, and anyone passionate about innovation, leadership, and the future of work.

From boardrooms to startups, we share the insights you need to lead in a data-first world. If you believe data is the new oil, this is your front-row seat to the trends shaping tomorrow.

Host: Danar Mustafa, AI leader & founder based in Sweden.

#digitalisering #digitaltransformation #industry4 #IoT #Analytics #AI #machinelearning #changemanagement #strategy #businessmodel #digitalstrategy #agile #Genai #openai #google #meta #amazon #aws #microsoft #mistral #Sweden

Hosted on Acast. See acast.com/privacy for more information.
Latest Episodes
View all 110 episodes

Gemma 4: Google's Open-Source LLM Competing with Chinese Models
Musk vs. Altman: The OpenAI Legal Battle Explained
AI cut 16,000 U.S. jobs a month — what the Goldman Sachs report actually says
Claude Mythos: The Model Anthropic Chose Not to Release
OpenAI's GPT-5.5: AI Agents Just Went Pro
Claude Opus 4.7: The Quiet Upgrade
US vs. China: The AI Race Is Closer Than You Think 2026
KPIs are Dead: The New Metric AI Companies are Using Instead in 2026

S4 Ep 18 OpenAI’s Bold 7-Point Industrial Policy for the AI Age

Five Strategic Takeaways
The document signals regulatory direction on access, taxation, worker protections, and safety
The four-day week changes the conversation about who benefits from AI efficiency
Worker voice is emerging as both ethical imperative and operational best practice
Frontier AI compliance requirements are coming
Read with both charity and skepticism

The Test of Sincerity
Watch for:
Does OpenAI implement the four-day week internally?
Do they accept monitoring that constrains their development?
Do they modify proposals based on criticism?
Do they advocate for policies against their commercial interest?

S4 Ep 17 The Anthropic Leak and What it Reveals About AI's Future
10-Component Prompt Architecture
1. Task context (role/persona)
2. Tone context (register)
3. Background data (docs, code, guides)
4. Detailed task description and rules
5. Examples (1-2 ideal outputs)
6. Conversation history
7. Immediate task description
8. Think step-by-step instructions
9. Output formatting
10. Prefilled response (advanced)

Strategic Implications

For Developers:
AI tools have more access than most employees
The leaked prompting framework is freely adoptable
Treat "leaked code" repos as malware

For Tech Leaders:
Demand transparency on internal vs external differences
Build dark code governance before incidents
Apply vendor security assessment to AI tools

For AI Strategy:
The moat is model + trust, not the harness
Architecture secrecy is a weak advantage
Partial transparency is worse than full transparency
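As a concrete illustration, the 10 components above can be assembled mechanically in a fixed order. This is a minimal Python sketch; the component names and dict-based joining logic are assumptions for illustration, not Anthropic's actual implementation.

```python
# Minimal sketch of assembling a prompt from the 10-component
# architecture listed above. Component names and ordering mirror the
# list; the API shape is an illustrative assumption.

PROMPT_COMPONENTS = [
    "task_context",          # 1. role/persona
    "tone_context",          # 2. register
    "background_data",       # 3. docs, code, guides
    "task_rules",            # 4. detailed task description and rules
    "examples",              # 5. 1-2 ideal outputs
    "conversation_history",  # 6. prior turns
    "immediate_task",        # 7. what to do right now
    "think_step_by_step",    # 8. reasoning instruction
    "output_formatting",     # 9. required output shape
    "prefilled_response",    # 10. advanced: seed the assistant's reply
]

def build_prompt(parts: dict) -> str:
    """Join whichever components are supplied, in canonical order."""
    return "\n\n".join(parts[name] for name in PROMPT_COMPONENTS if name in parts)

prompt = build_prompt({
    "immediate_task": "Review the attached diff for security issues.",
    "task_context": "You are a senior code reviewer.",
    "tone_context": "Be concise and direct.",
})
# task_context comes first regardless of the order the keys were supplied in
print(prompt.startswith("You are a senior code reviewer."))  # → True
```

The point of a fixed canonical order is that callers can supply components in any order, or omit them entirely, and still get a consistently structured prompt.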

S4 Ep 16 AI News Roundup March 2026: GPT-5.4, Nvidia GTC, EU AI Act & Top Startups

Your complete AI news roundup for March 2026, covering GPT-5.4’s human-surpassing benchmark performance, Nvidia’s Rubin GPU reveal at GTC 2026, OpenAI’s $110B funding round, DeepSeek V4’s open-source launch, and the EU AI Act’s approaching August enforcement deadline. Includes the latest in AI robotics, healthcare breakthroughs, Swedish AI policy, startup investments, chip hardware updates, and consumer adoption trends. Essential listening for AI leaders, developers, and business decision-makers staying ahead of the fast-moving artificial intelligence landscape.

Seven Key Takeaways
AI is simultaneously superhuman and subhuman by task
Funding concentration is extreme (83% to the top 3)
Consumer sentiment matters (QuitGPT forced contract changes)
Open source is catching up faster than expected
Sovereign AI infrastructure is accelerating
Agentic AI has moved to production
The skills premium is real, but the treadmill is accelerating

S4 Ep 15 Claude Code: How Anthropic Is Using Claude Code

Key Quotes from Anthropic Leaders

Boris Cherny, Head of Claude Code:
"I think by the end of the year, everyone is going to be a product manager, and everyone codes. The title software engineer is going to start to go away. It's just going to be replaced by 'builder,' and it's going to be painful for a lot of people."
"I think at this point it's safe to say that coding is largely solved."
"I have not edited a single line by hand since November."

Dario Amodei, CEO:
"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code."

Jack Clark, Co-founder:
"Something that we found is that the value of more senior people with really, really well-calibrated intuitions and taste is going up."

The Eight Best Practices
1. Invest in CLAUDE.md documentation: configuration files Claude reads at startup
2. Classify tasks as async vs synchronous: know what to supervise vs what to delegate
3. Create self-sufficient verification loops: tests before code, auto-run builds/lints
4. Start from a clean git state: checkpoint commits enable safe experimentation
5. Use MCP servers for sensitive data: better logging and access control
6. Build multi-instance parallel workflows: multiple Claude instances across repos
7. Use screenshots and multimodal input: Figma, dashboards, UI images
8. Prompt for simplicity: interrupt and ask "Try something simpler"

The AI PM Cert: visit https://aipmcert.com/

S4 Ep 14 What People Actually Want from AI

Episode: What 81,000 People Want From AI: The Most Human AI Report So Far
Study: Anthropic Global AI Survey (December 2025)
80,508 Claude users interviewed
159 countries
70 languages
AI-conducted open-ended conversations

Primary Aspirations (What People Want)
Category                  Percentage
Professional Excellence   18.8%
Personal Transformation   13.7%
Life Management           13.5%
Time Freedom              11.1%
Financial Independence    9.7%

Key insight: Productivity is often the surface story. When asked what productivity enables, people reveal deeper wants: family time, mental health, meaningful work, paths out of precarity.

S4 Ep 13 AI Politics in 2026: Pentagon AI Military

The Core Dispute

Pentagon Position:
Requires "all lawful use" provisions from AI vendors
Wants flexibility for future applications
Focused on Golden Dome, drone swarms, autonomous systems

Anthropic Position:
Two non-negotiables: no mass surveillance of Americans, no fully autonomous weapons
Will not sign contracts creating legal pathways to prohibited uses
Challenging supply chain risk designation in court

OpenAI Position:
Explicit contractual prohibitions on mass surveillance, autonomous weapons, and high-stakes automated decisions
Cloud-only deployments with OpenAI personnel in the loop
Maintains control over the safety stack

What the Military Wants AI For

Current Uses:
Intelligence analysis
Cyber operations
Operational planning
Threat assessment
Modeling and simulation
Classified environment support

S4 Ep 12 AI and Jobs in 2026

Episode: AI and Jobs in 2026: What Anthropic's Labor Report Really Means for Workers, Policy, and Business
Report: Anthropic Economic Index Labor Market Analysis (March 5, 2026)

The Headline Finding
No mass displacement yet, but entry is getting harder:
No systematic increase in unemployment for AI-exposed occupations
Job-finding rates for workers aged 22-25 in exposed fields: down ~14% vs 2022
Unemployment rates: flat
First visible effect: fewer young people getting their first foothold

Observed Exposure: The New Measure
Component               What It Measures
Theoretical Capability  % of tasks LLMs could theoretically perform
Observed Usage          What people actually do with Claude at work
Observed Exposure       Combined measure weighted toward automated/work-related uses

Why it matters: labor markets are shaped by adoption, workflow design, regulation, and trust, not just model demos.
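The "observed exposure" idea above, blending what models could do with what people actually do with them, can be sketched as a simple weighted combination. The 0.7/0.3 weighting below is a hypothetical choice for illustration; the report's actual formula is not specified here.

```python
# Illustrative sketch of an "observed exposure" style measure: blend the
# share of an occupation's tasks a model could perform (theoretical
# capability) with the share actually done with AI (observed usage),
# weighted toward real-world usage. The 0.7 weight is a hypothetical
# value for illustration, not the Anthropic report's actual formula.

def observed_exposure(theoretical: float, observed: float,
                      usage_weight: float = 0.7) -> float:
    """Weighted blend of observed usage and theoretical capability, in [0, 1]."""
    for share in (theoretical, observed):
        if not 0.0 <= share <= 1.0:
            raise ValueError("inputs must be shares in [0, 1]")
    return usage_weight * observed + (1.0 - usage_weight) * theoretical

# An occupation where models could handle 60% of tasks but only 20% are
# actually done with AI scores much closer to the usage figure:
print(round(observed_exposure(0.6, 0.2), 2))  # → 0.32
```

Weighting toward observed usage captures the episode's point: adoption, workflow design, regulation, and trust determine real exposure more than raw model capability does.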

S4 Ep 11 AI News: ChatGPT Ads, Super Bowl, Pentagon AI and Seedance 2.0

Coding Model Releases (Feb 12)
All three dropped the same day:
OpenAI: GPT-5.3-Codex-Spark (purpose-built for engineering workflows)
Google: Gemini 3 Deep Think
Anthropic: Major funding round announcement
The three-way battle for developer mindshare is officially a sprint

Pentagon AI Strategy
Framework: Five "Priority Sprint Projects"
Initiatives:
GenAI.mil for all-classification AI access
Enterprise agents playbook
Mandate: All military departments must identify 3+ priority AI projects within 30 days
Language: "Any lawful use" in procurement; "Military AI dominance" framing

Disney vs. ByteDance
Action: Cease-and-desist letters (Feb 14)
Target: Seedance 2.0 video generation
Accusation: Generating copyrighted characters (Star Wars, Marvel)
MPA Statement: "Unauthorized use of U.S. copyrighted works on a massive scale"
Implication: The AI copyright fight moves from theoretical to legal

HBR Productivity Study
Source: UC Berkeley study in Harvard Business Review
Finding: AI users worked faster, took on more tasks, and worked longer hours, often without being asked
Implication: AI isn't reducing workload; it's intensifying it
Recommendation: Managers must design for outcomes, not just output

Chinese AI Developments
Releases (mid-February):
DeepSeek V4: 1 trillion parameters, coding-focused
Alibaba Qwen 3.5
ByteDance Doubao upgrade
Cost Advantage (RAND): Chinese models run at 1/6 to 1/4 the cost of comparable U.S. systems
Market Share: DeepSeek holds ~89% among AI users in China

Spotify Engineering Transformation
Announcement (Feb 12): Top developers haven't manually written code since December
Tools: Claude Code; internal system "Honk"
Shift: Engineers are now "full-time AI orchestrators"
Implication: The future of engineering is operational, not hypothetical

Key Takeaways
Commercialization-safety tension is real: ads plus the safety team dissolution are not coincidental
Brand positioning matters: an 11% user bump from values messaging
Coding model wars are intensifying: three releases in one day
Government AI is accelerating: a 30-day Pentagon mandate
Copyright enforcement is getting real: Disney vs. ByteDance
AI may increase workload: design for outcomes, prevent burnout

Companies Mentioned
OpenAI, Anthropic, Google, Disney, Paramount, ByteDance, Spotify, DeepSeek, Alibaba, Motion Picture Association, Department of Defense

People Mentioned
Sam Altman (OpenAI CEO)
Joshua Achiam (OpenAI, now "chief futurist")

Studies Referenced
UC Berkeley/HBR: AI and workload intensification
BNP Paribas: Super Bowl ad effectiveness
RAND: Chinese AI cost analysis

S4 Ep 10 AGI: Sam Altman, Dario Amodei & Demis Hassabis Vision
Today we're doing something different. Instead of covering the news cycle, we're going deep on the three people who will likely shape how AGI arrives: Sam Altman of OpenAI, Dario Amodei of Anthropic, and Demis Hassabis of Google DeepMind. Each has a distinct philosophy about how to build transformative AI, what the risks are, and what happens to society when we get there. Understanding these differences isn't academic. These philosophies are determining the products we use, the policies being debated, and potentially the trajectory of human civilization.

S4 Ep 9 AI News February 1-8, 2026: The $650 Billion AI Arms Race Explodes

Key Takeaways
The model race is now a platform race (Cowork vs Frontier)
$650B Big Tech capex is the new reality
Professional software is under genuine threat
Hardware competition is intensifying (AMD, Broadcom)
Regulatory complexity is growing (federal vs state)
AI adoption is mainstream, but returns are concentrated
Super Bowl ads signal the consumer battleground

Companies Mentioned
Anthropic, OpenAI, Alphabet/Google, Amazon, Meta, Microsoft, NVIDIA, AMD, Broadcom, Thomson Reuters, LegalZoom, HP, Intuit, Oracle, State Farm, Uber, Cisco, BBVA, T-Mobile, Cerebras, Goodfire, Bedrock Robotics, Sana, Perplexity, Boston Dynamics, Caterpillar, Khan Academy

S4 Ep 8 AI News January 26-30, 2026: The Verticalization Era Begins

Major Launches This Week
Product                Company    Domain      Key Feature
Prism                  OpenAI     Science     GPT-5.2 with 400K token context for research
GOV.UK Assistant       Anthropic  Government  Agentic employment support for the UK
Personal Intelligence  Google     Consumer    Gmail + Photos integration in AI Mode
AI Overviews upgrade   Google     Search      Gemini 3 default, follow-up questions

OpenAI Prism Details
Model: GPT-5.2
400,000-token context window (~800 pages)
Fine-tuned for mathematical and scientific reasoning
Native LaTeX understanding
Visual Synthesis for diagrams to code
Pricing:
Personal: Free (unlimited projects/collaborators)
Education: Institutional tier (TBD)
Enterprise: Compliance features (TBD)
Built on: Acquired startup Crixet (LaTeX platform)
Competition:
Overleaf (LaTeX collaboration)
Mendeley/Zotero (reference management)
Google Scholar integration (anticipated)

TRAIN Act Summary
Name: Transparency and Responsibility for Artificial Intelligence Networks Act
Sponsors: Rep. Madeleine Dean (D-PA), Rep. Nathaniel Moran (R-TX)
Key Provisions:
Administrative subpoena for training data disclosure
"Subjective good faith belief" standard for requests
Non-compliance creates a "rebuttable presumption of copying"
Impact: Gives copyright holders discovery rights previously unavailable

Hardware & Infrastructure
ASML Q4 2025:
Orders: €13.2B ($15.8B), a record quarter
Analyst forecast: €6.85B (far exceeded)
Q4 sales: €9.72B
Full 2025 sales: €32.7B
Stock surge: ~6%
Intel: Activated ASML EXE:5200 High-NA EUV system; reduces manufacturing steps from 40 to 10
Spending Forecasts (Gartner):
2026: $2.53 trillion
2027: $3.33 trillion

S4 Ep 7 Davos WEF 2026: Elon Musk, Satya Nadella & AI’s Tsunami – 7 Brutal Truths for Leaders
In this episode, we break down the most uncomfortable AI truths that surfaced at Davos WEF 2026 – from IMF chief Kristalina Georgieva calling AI a “tsunami” for the global job market to Anthropic CEO Dario Amodei warning that 50% of white-collar jobs could be disrupted within five years. We unpack Elon Musk’s claim that energy, not algorithms, is now the real AI bottleneck, and NVIDIA’s Jensen Huang framing AI as the largest infrastructure build-out in human history. You’ll hear how McKinsey’s new agentic AI narrative ties into a projected 2.9 trillion dollars in value, why OpenAI, Microsoft, and Google are racing for the AI interface layer, and what Demis Hassabis, Satya Nadella, and Yuval Noah Harari really signaled about AGI, open vs closed models, and “everything made of words” being eaten by AI. Perfect for founders, executives, and policymakers who want the real story behind Davos 2026 and what it means for jobs, power, and leadership in the AI era.