
An Analog Brain In A Digital Age | With Marco Ciappelli
240 episodes
Book: Deep Future — Creating Technology That Matters | An Interview with Pablos Holman | An Analog Brain In A Digital Age With Marco Ciappelli
New Book: Healing the Sick Care System — Why People Matter | An Interview with Gil Bashe | An Analog Brain In A Digital Age With Marco Ciappelli
On the Internet, Nobody Knows You're Not Human — And Nobody's Asking | Written by Marco Ciappelli & Read by Tape3
Before the Robots Run. More reflections from RSAC 2026 — The Power of the Community and the Machines We Invited In. | Written By Marco Ciappelli & Read By Tape3
Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World | Written By Marco Ciappelli & Read by Tape3
Marketing, Brand, And Culture: Are You Paying the Silicon Valley Tax? A Conversation with Nick Richtsmeier of CultureCraft | Hosted by Marco Ciappelli
When Sci-Fi Becomes the Business Plan | A Brand Highlight Conversation with Jacob Flores, Head of Research at Type One Ventures | Hosted by Marco Ciappelli

Ep 233 | Protecting Kids Online Since 2007 and in the Age of AI: Ben Halpert on Savvy Cyber Kids at RSAC 2026
In this episode from RSA Conference 2026, Marco Ciappelli sits down with Ben Halpert, founder of the non-profit organization Savvy Cyber Kids, to discuss the critical intersection of child development and technology. Since its founding in 2007, Savvy Cyber Kids has been on a mission to provide parents and educators with the tools needed to guide children through the digital world. Ben explains why introducing technology too early can be detrimental to a child’s emotional preparedness and brain development, and why adult-led guidance is essential even when kids seem like "tech experts."

In this conversation, we explore:
- The Evolution of Threats: Moving from MySpace and CRT monitors to 24/7 access via mobile devices.
- Early Intervention: Why the "rhyme and picture book" approach works for children as young as three to teach concepts like online aliases and stranger safety.
- Safe AI for Kids: Introducing a new partnership with Chaperone, a platform featuring "homework mode" and parental controls to ensure AI is a tool for learning, not a shortcut for thinking.
- Going Global: How the organization has expanded internationally with materials translated into Spanish, German, French, and Hebrew.

About Our Guest: Ben Halpert is a cybersecurity veteran with over 25 years of experience and the founder of Savvy Cyber Kids. He is dedicated to helping parents navigate the "wild" of the internet with positive, developmentally appropriate programming.

Resources:
- Savvy Cyber Kids Website: savvycyberkids.org
- More RSAC 2026 Coverage: itspmagazine.com/rsac
- Marco's Website: Marcociappelli.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Ep 232 | Everyone Is Talking About Agentic AI at RSAC 2026. Almost Nobody Is Saying Anything Different | With Marco Ciappelli and Theresa Lanowitz
Marco Ciappelli sits down with cybersecurity evangelist and thought leader Theresa Lanowitz at the end of day one on the expo floor for a conversation that cuts through the noise — from shadow AI and leadership accountability, to brand identity, to why most companies here can't articulate a message above the fray. Plus: a Peloton story that accidentally became the best explanation of brand loyalty you'll hear all week.

Chapters:
- Judge Sentences CEO to 8 Hours on the RSAC Floor
- End of Day One: Setting the Scene
- Who Is Theresa Lanowitz
- The Binary View of AI: Love It, Fear It, or Find the Gray
- Leadership's Role in the AI Transformation
- Shadow AI: The Insider Threat Nobody Is Naming
- Why Some Companies Still Say No to AI
- Fighting With Your LLM (We All Do It)
- AI Slop and the Brand Differentiation Problem
- The Peloton Story: What Real Brand Loyalty Looks Like
- RSAC 2026: Everyone Sounds the Same
- Where Is Agentic AI Actually Going
- Integration, Orchestration, ROI: The Real Questions
- Make AI Your Own

What's actually covered:
→ Why agentic AI is dominating RSAC 2026 — and why it all sounds the same
→ Shadow AI: the insider threat nobody is calling an insider threat
→ What strong brand presence actually looks like (hint: it's not a circus tent)
→ Why fear — not budget — is the real reason companies still say no to AI
→ Integration, orchestration, ROI: what comes after the hype
→ The one message that matters: make AI your own

🔗 More from RSA Conference 2026: itspmagazine.com/rsac
Ep 231 | New Book: Climate Capital — Investing in the Tools for a Regenerative Future | An Interview with Tom Chi | An Analog Brain In A Digital Age With Marco Ciappelli
New Book: Climate Capital — Investing in the Tools for a Regenerative Future | An Interview with Tom Chi | An Analog Brain In A Digital Age With Marco Ciappelli

What if the economy isn't broken — just badly designed? Tom Chi, Google X founding member, inventor of 77 patents, and venture capitalist at At One Ventures, joined me on An Analog Brain In A Digital Age to discuss his new book Climate Capital: Investing in the Tools for a Regenerative Future. From the streets of Florence to the strip malls of Silicon Valley, from the mechanics of attention capture to the physics of ecological economics, this conversation goes far beyond climate. It's about how we design the systems we live inside — and whether we have the will to redesign them before it's too late.

📺 Watch | 🎙️ Listen | marcociappelli.com

Tom Chi has worked on things that changed the world. Microsoft Office. Web search. The self-driving car. Google Glass. He'll tell you himself that not all of them were hits, and he's fine with that — that's what it means to be an inventor. But what he's working on now is different in scale from anything before. Not a product. Not a platform. A redesign of the global economy.

His new book, Climate Capital: Investing in the Tools for a Regenerative Future, starts from a premise that sounds radical until you think about it for more than a few minutes: economics is a design discipline. And right now, it's poorly designed. Not maliciously — poorly. We built systems optimized for short-term capital extraction, and we're living with the consequences. The question Tom is asking is whether we can redesign them before those consequences become irreversible.

He didn't get there through ideology. He got there through Florence. Tom was auditing sustainable MBA courses alongside his partner when he was invited to a conference in Italy. He landed, got a day off, wandered the streets — and something clicked. The entire city is built from sustainable materials.
And it's one of the most beautiful places on earth. That moment demolished an assumption he didn't even know he was carrying: that sustainable living means downgrading. Florence is a 2,000-year-old counterexample to every joke about Birkenstocks and cold showers. We knew how to do this. We just forgot. Which brings us to the first big thread of our conversation: the pattern of forgetting. We talked about this in the context of technology, not history. Specifically, how the shift from software you paid for to software supported by advertising quietly changed everything. When you pay for a tool, the goal is to make it better. When the tool is supported by advertisers, the goal is to keep you inside it as long as possible. Clippy used to annoy us because it interrupted our train of thought. Now interrupting our train of thought is the entire business model. Tom has a phrase for what's happening at scale: cognitive despoiling. We spent the 20th century strip mining the physical resources of the planet. We're spending the 21st century strip mining the cognitive resources of humanity. There's a finite number of coherent thoughts this civilization can produce. And we're burning through them — with misinformation, amygdala triggers, and dopamine loops — the same way we burned through forests and waterways. The damage is invisible because it's underwater, like ocean trawling. But it's real. And it compounds across generations. This is where I had to push back a little. Because I grew up in Florence. I made the jump to digital. I love my vinyls and I love my streaming library. I'm part of the contradiction he's describing. And I asked him: given all this, where do you even start? His answer is the most practical thing I've heard in a long time. Start with physical businesses. The ones actually causing most of the damage — to water, soil, air, biodiversity. 
And here's the part that almost nobody is talking about: 90% of the cost structure of a physical business already aligns ecological and economic goals. Fewer raw materials used means lower feedstock costs and less extraction. Less energy consumed means lower processing costs and fewer emissions. Shorter supply chains mean lower logistics costs and fewer transport emissions. The economy and the ecology are already pointing the same direction on 90% of what matters. The 5% that isn't aligned — pollution — is what the lobbyists fight about. So that's what dominates the news. And that's why we think this is harder than it is. Tom's firm, At One Ventures, is built around this insight. They invest in what he calls the triad: disruptive deep tech that delivers radically better unit economics and radically better environmental outcomes at the same time. Their portfolio companies don't sell sustainability. They sell efficiency. The ecological benefit is baked in by design. The customers buy it because it's cheaper and better. The planet wins as a side effect. That's the book. Part toolkit, part framework, part demonstration t

Ep 230 | Do You Know What's In Your Software? A Cybersecurity Story with Manifest Cyber | A Brand Highlight Conversation with Daniel Bardenstein, Co-Founder at Manifest Cyber
There is a question that sounds almost embarrassingly simple. After a vulnerability is discovered in a piece of widely used software — something like Log4Shell, which shook the security world and left hundreds of thousands of organizations exposed overnight — the question organizations scrambled to answer was this: where is this code, and what does it touch? Most couldn't answer it. Not the Fortune 500 companies. Not the government agencies. Not the critical infrastructure operators. Not the hospitals or the banks or the utilities. They had built and bought mountains of software over years and decades, and when the moment came to understand what was actually inside it, they were effectively blind. That gap is exactly what Daniel Bardenstein set out to close when he co-founded Manifest Cyber in 2023. And in a conversation on ITSPmagazine's Brand Highlight series, he made a case for technology transparency that is hard to argue with — not because it's technically complex, but because the analogy he draws is so strikingly obvious once you hear it. "If you want to buy a house, you get to go inside the house, do the home inspection," he said. "You want to buy food from the grocery store — you can look at the ingredients. Even our clothes tell you what they're made of, how to care for them, and where they're from." But software? The technology running hospital MRI machines, weapon systems, financial infrastructure, water delivery? No transparency required. No ingredient label. No inspection rights. Just trust. That trust, as Log4Shell demonstrated, is a vulnerability in itself. Bardenstein came to this problem with credentials that few founders in the space can claim. Before starting Manifest, he spent four and a half years in the US government leading large-scale cyber programs and serving as technology strategy lead at CISA — the Cybersecurity and Infrastructure Security Agency. 
He saw firsthand how defenders are perpetually at a disadvantage, operating without the basic visibility they need to do their jobs. His mission became building the tools to change that. The problem, he's quick to point out, has not improved in the years since Log4Shell. Software supply chain attacks have multiplied — XZ Utils, NPM Polyfill, and others following the same pattern: trusted software becomes the attack vector, and it spreads fast. Meanwhile, most security teams are still operating with SCA tools that generate noisy, overwhelming alerts and vendor risk programs built on Excel spreadsheets and questionnaires rather than actual empirical data about the security of what they're buying. "Security teams have a false sense of security," Bardenstein said. The gap between what organizations think they know and what they actually know about their software supply chains remains dangerously wide. Manifest Cyber addresses this across the full lifecycle. For organizations that build software, the platform maps every open source dependency, assesses it for risk, and ensures developers can write more secure code without losing velocity. For organizations that buy software — which is everyone — it finds risks before procurement, then continuously monitors every third party component so that when something breaks, they know the blast radius in seconds, not weeks. The timing matters. Regulation is catching up to the problem. The EU AI Act, the Cyber Resilience Act, and a growing body of global policy are beginning to demand exactly the kind of software supply chain transparency that Manifest is built to provide. Organizations that wait to build this capability will find themselves scrambling to comply — those that build it in now will have it as a competitive advantage. The ingredient label for software has always been missing. Manifest Cyber is writing it. 
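Bardenstein's "ingredient label" has a concrete form in practice: the SBOM (software bill of materials), commonly expressed in formats like CycloneDX. The sketch below is hypothetical and is not Manifest Cyber's product — it simply shows how even a minimal CycloneDX-style component list lets an organization answer the Log4Shell question ("where is this code?") in seconds; the component names and versions are illustrative.

```python
# Hypothetical sketch: querying a minimal CycloneDX-style SBOM for components
# vulnerable to Log4Shell (log4j-core versions before the 2.15.0 fix).
# The SBOM content here is illustrative, not from any real inventory.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"type": "library", "name": "jackson-databind", "version": "2.13.0",
         "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.0"},
    ],
}

def parse_version(v):
    """Turn a dotted version string like '2.14.1' into (2, 14, 1)."""
    return tuple(int(part) for part in v.split("."))

def affected_components(sbom, name, fixed_in):
    """Return components matching `name` with a version below `fixed_in`."""
    fixed = parse_version(fixed_in)
    return [c for c in sbom["components"]
            if c["name"] == name and parse_version(c["version"]) < fixed]

hits = affected_components(sbom, "log4j-core", "2.15.0")
for c in hits:
    print(c["purl"])  # prints the purl of each vulnerable component
```

With a complete SBOM for every built and bought application, this same query scales from one component to an entire portfolio — which is the "blast radius in seconds, not weeks" point above.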
Marco Ciappelli interviews Daniel Bardenstein, CEO & Co-Founder of Manifest Cyber, for ITSPmagazine's Brand Highlight series.

HOST
Marco Ciappelli — Co-Founder & CMO, ITSPmagazine | Journalist, Writer & Branding Advisor
🌐 https://www.marcociappelli.com
🌐 https://www.itspmagazine.com

GUEST
Daniel Bardenstein, CEO and Co-Founder of Manifest Cyber
https://www.linkedin.com/in/bardenstein

RESOURCES
Manifest Cyber: https://www.manifestcyber.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS
Manifest Cyber, software supply chain security, SBOM, Log4Shell, open source vulnerability, technology transparency, Daniel Bardenstein, CISA, software composition analysis, third party risk, EU Cyber Resilience Act, AppSec
Ep 229 | New Book! Lost in Time — Our Forgotten and Vanishing Knowledge | Forgotten Technology, Ancient Wisdom & Digital Amnesia | An Interview with Jack R. Bialik | An Analog Brain In A Digital Age With Marco Ciappelli
New Book: Lost in Time — Our Forgotten and Vanishing Knowledge | An Interview with Jack R. Bialik | An Analog Brain In A Digital Age With Marco Ciappelli

There's a particular arrogance embedded in how we talk about progress. We speak about innovation as if it moves in one direction only — forward, upward, smarter, faster. But what if the line isn't straight? What if it loops, doubles back, and occasionally vanishes entirely?

That's the uncomfortable question at the center of my conversation with Jack R. Bialik. His book Lost in Time: Our Forgotten and Vanishing Knowledge doesn't read like a history lesson. It reads like a case file — evidence, example by example, that the civilization we assume is the most advanced in human history is also, in some critical ways, deeply amnesiac.

Take cataract surgery. We learned it in the 1700s, right? Except we didn't. Surgeons in ancient India were performing it by 800 BCE. The ancient Egyptians and Babylonians had diagrams of the procedure dating back to 2,400 BCE. The knowledge existed, worked, and then — somewhere in the chaos of collapsing empires and burning libraries — it vanished. We didn't progress past it. We forgot it, and then reinvented it from scratch, centuries later, convinced we were doing something new.

Or the Baghdad Battery: clay pots, 2,000 years old, which, when filled with acid, can generate 1.1 volts of electricity. We don't know what they used them for. We don't know who figured it out. We just know it worked, it existed, and then it didn't anymore.

This is what Bialik calls the pattern of loss — and it's not random. It follows catastrophe: the Library of Alexandria, the systematic destruction of Mayan records, the slow erosion of oral traditions as writing systems took over. Knowledge disappears when the systems that carry it collapse. And here's where the conversation gets uncomfortably relevant: we are building those systems right now, and we are not thinking about how long they'll last.
The curator at the Computer History Museum told Bialik that to preserve the data from early IBM PCs and Macintosh computers, they had to print it on paper. The floppy disks had become brittle. The formats were unreadable. The digital archive was failing — and the only solution was to go analog. A vinyl record from the 1920s still plays. A CD from the 1980s may not survive another decade.

I've been thinking about this since we recorded. My brain is analog — that's not just a podcast title, it's a philosophy. I grew up in Florence, surrounded by things that had survived centuries because they were made to last: stone, fresco, manuscript. Then I jumped on the digital train like everyone else, seduced by infinite libraries on my phone, music on demand, knowledge at my fingertips. But what Bialik is pointing out is that fingertips are fragile. And so are hard drives.

The deeper issue isn't storage format. It's the distinction Bialik draws between knowledge and wisdom. Knowledge is the data — the cataract surgery technique, the battery design, the pyramid engineering. Wisdom is knowing why it matters, when to use it, and what the consequences might be. We've gotten extraordinarily good at accumulating knowledge. We are considerably worse at transmitting wisdom. And wisdom, Bialik argues, doesn't live in databases. It lives in the space between people — in stories, in teaching, in the slow transmission of judgment across generations. That's why oral tradition survived when everything else failed. Not because it was more sophisticated, but because it was more human. It didn't require a device to run on.

I don't know how to solve the digital longevity problem. Neither does Bialik — not yet. But I think the first step is admitting we have one. That's actually one of the quietest, most powerful arguments in the book: be humble. We don't know everything. We never did. And some of the things we've lost might be exactly what we need right now.
The question isn't just what we've forgotten. It's what we're forgetting today, while we're too busy scrolling to notice.

Grab Lost in Time: Our Forgotten and Vanishing Knowledge — link below — and spend some time with a perspective that goes very, very far back. Which is maybe the only way to see very, very far forward.

And if this kind of conversation is what you come here for, subscribe to the newsletter at marcociappelli.com. More of this. Less noise.

— Marco Ciappelli
Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age
⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍

About Marco
Marco Ciappelli is Co-Founder & CMO of ITSPmagazine, Co-Founder & Creative Director of Studio C60, Branding & Marketing Advisor, Personal Branding Coach, Journalist, Writer, and Host of An Analog Brain In A Digital Age podcast. Born in Florence, Italy, and based in Los Angeles, he explores the intersection of technology, society, storytelling, an

Ep 228 | Agade: The AI-Powered Wearable Robots That Protect Workers, Not Replace Them | A Brand Highlight Conversation with Lorenzo Aquilante, Co-Founder and CEO of AGADE
Agade: The AI-Powered Wearable Robots That Protect Workers, Not Replace Them

AI Meets Human Craftsmanship

There's something poetic about a technology born to help people with muscular dystrophy finding its second life on factory floors and logistics warehouses. That's the story of Agade, an Italian deeptech startup that began as a research project at Politecnico di Milano and evolved into something far more ambitious: a mission to preserve human craftsmanship in an age of automation.

I sat down with Lorenzo Aquilante, CEO and co-founder of Agade, to talk about their journey from healthcare innovation to industrial exoskeletons—and what it was like showcasing their latest product at CES 2026.

The origin story matters here. Back in 2017, researchers at Politecnico di Milano started developing exoskeletons for people affected by muscular dystrophy. They created something different—a semi-active model powered by AI that recognizes when a user is lifting and responds accordingly. It wasn't just about motors and sensors. It was about intelligence.

Then companies came knocking. Manufacturing firms, logistics operations, industries where human workers still matter because their skills, experience, and judgment can't be replaced by machines. They saw potential. Why not use this technology to protect the people doing the heavy lifting—literally?

Agade was founded in 2020 with a clear mission: preserve craftsmanship against the physical toll of material handling. Not replace humans. Protect them.

The company now has two products. The first, launched in 2024, focuses on shoulder assistance. The second—the one they brought to CES 2026—targets the lower back, which makes sense when you consider that back pain is practically an occupational hazard for anyone moving materials all day.

What makes Agade's approach different is that semi-active AI system. The exoskeleton knows when you're lifting. It responds. It's not just a passive brace or a fully motorized suit that takes over. It's somewhere in between—smart enough to help, light enough to wear all day.

Lorenzo emphasized something that resonated with me: the importance of feedback. From day one, Agade has been obsessed with real-world testing. Not lab conditions. Actual workers doing actual jobs. Because the buyer isn't the user—companies purchase these for their employees—and that creates a unique dynamic. You need both sides to believe in the technology.

The CES experience brought that home. There's always the initial wow factor when someone sees a wearable robot with motors and sensors. But the real work happens after the demo, when users tell you what needs to improve. That's where the collaboration lives.

And here's what struck me most about this conversation: Agade isn't trying to remove humans from the equation. They're trying to keep humans in it longer, healthier, and more capable. In a world racing toward full automation, there's something refreshing about a company betting on human skill—and building technology to protect it.

The products are available globally. You can reach Agade through their website at agadexoskeletons.com, find them on LinkedIn and other social channels, and even arrange trials before committing to a purchase.

For those of us watching the intersection of AI, robotics, and human labor, Agade represents a different path. Not humans versus machines. Humans with machines. Tools that amplify rather than replace.

That's a story worth telling.

Marco Ciappelli interviews Lorenzo Aquilante, CEO & Co-Founder of Agade, for ITSPmagazine's Brand Highlight series following CES 2026.

>>> Marcociappelli.com

GUEST
Lorenzo Aquilante, CEO and co-founder of Agade
https://www.linkedin.com/in/lorenzo-aquilante-108573b0/

RESOURCES
AGADE: https://agade-exoskeletons.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS
Agade, exoskeleton, CES 2026, wearable robotics, AI, future of work, industrial exoskeleton, made in Italy, workplace safety, deeptech, robotics.
Ep 227 | Chat Control: The EU Law That Could End Privacy and Why Breaking Encryption Won't Stop Criminals | A Conversation with Cybersecurity Expert John Salomon | Redefining Society and Technology Podcast with Marco Ciappelli
None of Your Goddamn Business

John Morgan Salomon said something during our conversation that I haven't stopped thinking about. We were discussing encryption, privacy laws, the usual terrain — and he cut through all of it with five words: "It's none of your goddamn business."

Not elegant. Not diplomatic. But exactly right.

John has spent 30 years in information security. He's Swiss, lives in Spain, advises governments and startups, and uses his real name on social media despite spending his career thinking about privacy. When someone like that tells you he's worried, you should probably pay attention.

The immediate concern is something called "Chat Control" — a proposed EU law that would mandate access to encrypted communications on your phone. It's failed twice. It's now in its third iteration. The Danish Information Commissioner is pushing it. Germany and Poland are resisting. The European Parliament is next.

The justification is familiar: child abuse materials, terrorism, drug trafficking. These are the arguments that appear every time someone wants to break encryption. And John walked me through the pattern: tragedy strikes, laws pass in the emotional fervor, and those laws never go away. The Patriot Act. RIPA in the UK. The Clipper Chip the FBI tried to push in the 1990s. Same playbook, different decade.

Here's the rhetorical trap: "Do you support terrorism? Do you support child abuse?" There's only one acceptable answer. And once you give it, you've already conceded the frame. You're now arguing about implementation rather than principle.

But the principle matters. John calls it the panopticon — the eighteenth-century prison design where all cells face inward toward a central guard tower. No walls. Total visibility. The transparent citizen. If you can see what everyone is doing, you can spot evil early. That's the theory.

The reality is different. Once you build the infrastructure to monitor everyone, the question becomes: who decides what "evil" looks like? Child pornographers, sure. Terrorists, obviously. But what about LGBTQ individuals in countries where their existence is criminalized? John told me about visiting Chile in 2006, where his gay neighbor could only hold his partner's hand inside a hidden bar. That was a democracy. It was also a place where being yourself was punishable by prison.

The targets expand. They always do. Catholics in 1960s America. Migrants today. Anyone who thinks differently from whoever holds power at any given moment. These laws don't just catch criminals — they set precedents. And precedents outlive the people who set them.

John made another point that landed hard: the privacy we've already lost probably isn't coming back. Supermarket loyalty cards. Surveillance cameras. Social media profiles. Cookie consent dialogs we click through without reading. That version of privacy is dead. But there's another kind — the kind that prevents all that ambient data from being weaponized against you as an individual. The kind that stops your encrypted messages from becoming evidence of thought crimes. That privacy still exists. For now.

Technology won't save us. John was clear about that. Nor will it destroy us. Technology is just an element in a much larger equation that includes human nature, greed, apathy, and the willingness of citizens to actually engage. He sent emails to 40 Spanish members of the European Parliament about Chat Control. One responded.

That's the real problem. Not the law. Not the technology. The apathy.

Republic comes from "res publica" — the thing of the people. Benjamin Franklin supposedly said it best: "A republic, if you can keep it." Keeping it requires attention. Requires understanding what's at stake. Requires saying, when necessary: this is none of your goddamn business.

Stay curious. Stay human. Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

John Salomon
Experienced, international information security leader. vCISO, board & startup advisor, strategist.
https://www.linkedin.com/in/johnsalomon/

Ep 226 | Paoletti Custom Guitars at NAMM 2026: Handcrafted in Florence, Italy from Wine Barrel Wood | A Brand Highlight Conversation with Filippo Martini, Managing Director at Paoletti Guitars | NAMM 2026 Coverage
Wine Barrels, Duomo Marble, and Florence: Paoletti Custom Guitars at NAMM 2026

I've been away from Florence for 25 years. I didn't know there was a guitar company like this back home.

At NAMM 2026, I found Filippo Martini from Paoletti Custom Guitars—a boutique manufacturer based in the heart of Tuscany, building instruments that are equal parts guitar and artwork.

Paoletti does something no one else does: they build guitars from chestnut wood sourced from Italian wine barrels. The material offers a wide harmonic spectrum, but it's difficult to work with. You need to know how to handle it. Founder Fabrizio Paoletti figured it out, and now every guitar they produce shows the natural grain—no opaque finishes, no hiding the wood.

The craftsmanship runs deep. Bridges, pickguards, pickups—all made in-house. Necks carved from Canadian maple, roasted on-site. 99% of the process happens in Tuscany. As Filippo put it, "Kilometer zero." Everything local except the screws.

Their model is 100% custom. You don't buy a Paoletti off the rack. You tell them your style, your sound, the genre you play. They build around your vision while keeping the Italian essence intact—chestnut wood, Italian-made components, tailored to your idea.

But what stopped me cold was the Duomo collection.

Eight individual guitars, each hand-engraved by Fabrizio Paoletti himself. Three years of work. The subject: Florence's cathedral—the Duomo di Santa Maria del Fiore.

This isn't just decoration. Paoletti secured an official partnership with the Opera del Duomo, the authority that oversees the cathedral. The back of each guitar reproduces the marble floor pattern from inside the Duomo. And when the collection is complete this October, every guitar will contain an actual piece of marble from the cathedral.

I got shivers standing there.

This is what happens when guitar making meets Italian heritage. It's not about specs or market positioning. It's about place, history, and craft passed down through generations.

Filippo invited me to visit the workshop in Florence when I return in April. I'm going. I want to see where this happens—where wine barrel wood becomes an instrument, where cathedral marble gets embedded into a guitar body, where a team of artisans builds one-of-one pieces for players around the world.

Florence is known for many things. Leather. Art. Architecture. The Renaissance itself. Now I know it's also home to some of the most distinctive guitars being made anywhere.

Paoletti proves that boutique doesn't mean small ambitions. They're partnering with galleries in Dubai, working with the Duomo authorities, and bringing Florence to NAMM.

Not bad for a company I didn't even know existed until I walked the show floor and heard an Italian accent.

Sometimes you find home in unexpected places.

Marco Ciappelli interviews Filippo Martini from Paoletti Custom Guitars at NAMM 2026 for ITSPmagazine.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Filippo Martini, Managing Director at Paoletti Guitars | Florence | Tuscany | Italy

RESOURCES
Learn more about Paoletti Guitars: https://www.paolettiguitars.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

Ep 225: Yamaha at NAMM 2026: Introducing the Chris Buck Custom Revstar Guitar, Pacifica SC, and a Deep Dive into the BB735 Bass | A Brand Highlight Conversation with Andy Winston, Product Training Specialist at Yamaha | NAMM 2026
60 Years Forward: Yamaha at NAMM 2026 — Chris Buck Revstar, Pacifica SC & 60 Years of Guitar Innovation

Some brands chase nostalgia. Yamaha builds forward.

At NAMM 2026, I sat down with Andy Winston to talk about 60 years of Yamaha guitar design—and why this company keeps delivering instruments that punch way above their price point.

The conversation started with the Chris Buck Signature Revstar. Buck is the guitarist for Cardinal Black, and he's earned his own model. The specs tell the story: overwound P90 pickups for a hotter sound, a wraparound tailpiece with adjustable saddles, stainless steel frets, lightweight tuners, and those old-school inlays from the first-generation Revstar. No boost circuit. Buck wanted it stripped to essentials.

Then Andy dropped a tease: Matteo Mancuso is getting his own Revstar this summer. The Italian virtuoso. That's a statement.

We moved to the new Pacifica SC—Yamaha's answer for T-style players. Humbucker in the neck, single coil in the bridge, and pickups designed in partnership with Rupert Neve's team. The boost circuit under the bridge pickup gives you five sounds from two pickups. Made in Indonesia at $999, or made in Japan with a compound radius fretboard and IRA wood treatment at $2,199.

I bought my nephew a Pacifica. Entry level, around $200. It works. That's Yamaha's philosophy—you can start at $200 and work your way up to a Mike Stern signature model without ever leaving the family.

But here's what stuck with me.

Andy said something that defines Yamaha's approach: "We don't do reissues. You're never gonna see us reissue a 1972."

Sixty years of guitar history, and they're not looking backward. The Revstar draws inspiration from the 1970s Super Flight, sure—but it's chambered mahogany, tuned to eliminate harsh mid-range frequencies. Yamaha builds pianos, violins, marimbas. They know how to tune wood. They apply that knowledge to electric guitars in ways other companies don't.

The BB Bass series came next. String-through body with a 45-degree break angle. Extra bolts pulling the neck tight into the pocket. A maple stripe running through the center of the body for note response. Active/passive switching. Five-ply neck. Professional features at prices that don't require a car payment.

"We give people more instrument than what a price tag says," Andy told me.

That's not marketing. That's mission.

Before we wrapped, Andy shared a personal story. In 1977, hair down to his shoulders, bell bottoms on, his mom decided he was serious about guitar. She bought him a Yamaha FG-75. His first real acoustic. He doesn't have that one anymore, but he found a replacement. Had to.

That's brand loyalty earned over decades. Not through heritage mythology—through instruments that work, that last, that give players what they need without emptying their wallets.

Sixty years of guitar design. No reissues. Just forward.

Yamaha keeps proving that innovation and accessibility aren't mutually exclusive.

Marco Ciappelli interviews Andy Winston from Yamaha at NAMM 2026 for ITSPmagazine.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Andy Winston, Product Training Specialist at Yamaha

RESOURCES
Learn more about Yamaha Guitars: https://www.yamaha.com/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

Ep 224: The Gift of Music: Guitar Center Foundation at NAMM 2026 | A Conversation with Michelle Wolff, Guitar Center Foundation | The NAMM Show 2026 Event Coverage | On Location with Sean Martin and Marco Ciappelli
At the Guitar Center Foundation, music is treated as a shared resource rather than a luxury. During this conversation at the NAMM Show 2026, Michelle Wolff, representing the Foundation, explains how access to real instruments can change the trajectory of a student, a patient, or a veteran simply by making music possible in the first place.

The Foundation's work centers on donating thousands of instruments to schools, hospitals, and veteran centers, with a focus on communities where funding for music programs is often the first thing cut. Through a structured grant process, organizations apply for instruments quarterly, with roughly 150 requests reviewed each cycle. About 30 of those requests are fulfilled, helping sustain programs that might otherwise disappear.

Beyond instrument donations, the Foundation is expanding how it shows up in communities. Plans include live donation events that bring instruments directly into schools and hospitals, often paired with artist participation to create meaningful, memorable moments. New donor and ambassador programs are also taking shape, designed to broaden awareness and bring more voices into the mission.

Partnerships play a major role in that effort. The conversation highlights a recent collaboration tied to the 100 Billion Meals initiative, where music, visual art, and social impact intersect to amplify multiple causes at once. These partnerships extend the Foundation's reach while reinforcing the idea that music can support broader humanitarian goals.

Wolff also shares a personal connection to the mission. As a former vocal performance major at the University of Texas Butler School of Music, she understands how deeply musicians identify with their craft. After experiencing a vocal injury herself, she speaks to the importance of supporting musicians through change and helping them build identities that extend beyond a single instrument, without losing music as a core part of who they are.

That perspective brings the Foundation's work full circle. Access to instruments is not only about creating future professionals. It is about expression, resilience, and giving people the chance to discover what music can mean in their own lives.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

Guitar Center Foundation: https://www.guitarcenterfoundation.org
100 Billion Meals initiative: https://100billionmeals.org
The NAMM Show 2026: https://www.namm.org/thenammshow/attend
Music Evolves: Sonic Frontiers Newsletter | https://www.linkedin.com/newsletters/7290890771828719616/
More from Marco Ciappelli on the Redefining Society and Technology Podcast: https://redefiningsocietyandtechnologypodcast.com/

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://www.studioc60.com/performance#briefing
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.studioc60.com/performance#ideas

KEYWORDS: music charity, instrument donations, namm show 2026, music education access, supporting musicians, music nonprofit, guitar center foundation, music programs schools, music and community, philanthropy in music, guitar center, michelle wolff, marco ciappelli
Ep 223: AI Art vs Human Creativity — The Real Difference and Why AI Cannot Be an Artist | A Conversation with AI Expert Andrea Isoni, PhD, Chief AI Officer and AI Speaker | Redefining Society and Technology with Marco Ciappelli
The Last Touch: Why AI Will Never Be an Artist

I had one of those conversations... the kind where you're nodding along, then suddenly stop because someone just articulated something you've been feeling but couldn't quite name.

Andrea Isoni is a Chief AI Officer. He builds and delivers AI solutions for a living. And yet, sitting across from him (virtually, but still), I heard something I rarely hear from people deep in the AI industry: a clear, unromantic take on what this technology actually is — and what it isn't.

His argument is elegant in its simplicity. Think about Michelangelo. We picture him alone with a chisel, carving David from marble. But that's not how it worked. Michelangelo ran a workshop. He had apprentices — skilled craftspeople who did the bulk of the work. The master would look at a semi-finished piece, decide what needed refinement, and add the final touch.

That final touch is everything.

Andrea draws the same line with chefs. A Michelin-starred kitchen isn't one person cooking. It's a team executing the chef's vision. But the chef decides what's on the menu. The chef checks the dish before it leaves. The chef adds that last adjustment that transforms good into memorable.

AI, in this framework, is the newest apprentice. It can do the bulk work. It can generate drafts, produce code, create images. But it cannot — and here's the key — provide that final touch. Because that touch comes from somewhere AI doesn't have access to: lived experience, suffering, joy, the accumulated weight of being human in a particular time and place.

This matters beyond art. Andrea calls it the "hacker economy" — a future where AI handles the volume, but humans handle the value. Think about code generation. Yes, AI can write software. But code with a bug doesn't work. Period. Someone has to fix that last bug. And in a world where AI produces most of the code, the value of fixing that one critical bug increases exponentially. The work becomes rarer but more valuable. Less frequent, but essential.

We went somewhere unexpected in our conversation — to electricity. What does AI "need"? Not food. Not warmth. Electricity. So if AI ever developed something like feelings, they wouldn't be tied to hunger or cold or human vulnerability. They'd be tied to power supply. The most important being to an AI wouldn't be a human — it would be whoever controls the electricity grid.

That's not a being we can relate to. And that's the point.

Andrea brought up Guernica. Picasso's masterpiece isn't just innovative in style — it captures something society was feeling in 1937, the horror of the Spanish Civil War. Great art does two things: it innovates, and it expresses something the collective needs expressed. AI might be able to generate the first. It cannot do the second. It doesn't know what we feel. It doesn't know what moment we're living through. It doesn't have that weight of context.

The research community calls this "world models" — the attempt to give AI some built-in understanding of reality. A dog doesn't need to be taught to swim; it's born knowing. Humans have similar innate knowledge, layered with everything we learn from family, culture, experience. AI starts from zero. Every time.

Andrea put it simply: AI contextualization today is close to zero.

I left the conversation thinking about what we protect when we acknowledge AI's limits. Not anti-technology. Not fear. Just clarity. The "last touch" isn't a romantic notion — it's what makes something resonate. And that resonance comes from us.

Stay curious. Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Ep 222: Gibson Guitars at NAMM 2026: 131 Years of Craftsmanship, Innovation & Functional Art | A Brand Highlight Conversation with Jeff Stempka, Global Brand & Marketing at Gibson | NAMM 2026
131 years. Still handcrafted in Nashville. Still changing music.

At NAMM 2026, Sean Martin and Marco Ciappelli sat down with Jeff Stempka, Global Brand & Marketing at Gibson & Gibson Custom, to talk about what makes this brand untouchable—the craftsmanship, the artist connection, and why people will stretch their budget just to hold one.

From the Les Paul Studio Double Trouble to the ES-335 Fifties and Sixties refresh, Gibson is honoring its legacy while pushing forward.

Jeff said it best: "These are tools that enable incredible musicians to take the instruments and do something we never intended."

🎸 Les Paul Studio Double Trouble – Modern collection, coil splits, pure bypass
🎸 ES-335 Fifties & Sixties – Neck profiles for every player
🎸 100 Years of Flat Tops – From Orville Gibson to today

This isn't just gear. It's functional art. It's history. It's emotion.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Jeff Stempka, Global Brand & Marketing at Gibson

RESOURCES
Learn more about Gibson Guitars: https://www.gibson.com/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

Ep 221: PRS Guitars at NAMM 2026: John Mayer Wild Blue Silver Sky & Ed Sheeran Baritone Revealed | A Brand Highlight Conversation with Alex Chadwick from PRS Guitars | NAMM 2026
Vintage Dreams, Modern Hands: A Conversation with PRS Guitars at NAMM 2026

They were literally closing down the show floor when I grabbed Alex Chadwick from PRS Guitars for a conversation I wasn't willing to miss.

We'd been talking off-mic about something that kept nagging at me—this tension between technology and creativity that runs through everything in the music world right now. So I hit record, security guards circling, and asked him straight: Is technology helping musicians become better artists, or do you still need to learn the hard way?

His answer was refreshingly honest. Technology isn't inherently good or bad. It's a tool. When it helps people be more expressive, more creative—that's the win. When it gets in the way of that expression? That's when we have a problem.

It's the kind of nuance that gets lost in the usual gear coverage.

PRS brought some beautiful new instruments to NAMM this year. The John Mayer Wild Blue Silver Sky stopped people in their tracks—a sharp turquoise finish with the first matching headstock ever produced on a Silver Sky from their Maryland factory. Limited to a thousand pieces worldwide. For Mayer fans and Silver Sky devotees alike, this one feels special.

Then there's the Ed Sheeran Semi-Hollow Piezo Baritone. A 27.7-inch scale instrument tuned a fifth below standard, with discrete outputs for both magnetic and piezo elements. But here's what got me: each guitar ships with a signed print of Sheeran's original artwork that appears on the body. He's a visual artist too. The instrument becomes a canvas for multiple creative expressions at once.

But the conversation that really stuck with me was about vintage guitars and why we romanticize them so much.

Those 1950s and 60s instruments—the ones on posters, in documentaries, making the music that shaped entire generations—they've become holy relics. And the ones that actually sound magical? They cost as much as a house now. So how does anyone access that?

Chadwick explained something about PRS's philosophy that I found genuinely compelling. They don't go back to the fifties. They go back to 1985. That gives them freedom—they can draw inspiration from those holy grail instruments without being trapped by their quirks, their inconsistent tolerances, their aged components. They can take what made those guitars legendary and build it into something repeatable, accessible, and comfortable.

The goal, he said, is to create instruments that get out of the way. Guitars that let the person be more expressive instead of fighting against limitations.

That phrase has been echoing in my head since I left Anaheim. Instruments that get out of the way.

Because that's really what this is about, isn't it? All the gear, all the technology, all the innovation—it only matters if it helps someone find their voice. Make their own music. Tell their own story.

PRS seems to understand that. In a world obsessed with vintage nostalgia and spec-sheet comparisons, they're building for expression.

And that's worth a conversation, even when security is showing you the door.

Marco Ciappelli reports from NAMM 2026 for ITSPmagazine, exploring the intersection of technology, creativity, and the humans who make music possible.

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Alexander Chadwick, PRS Guitars

RESOURCES
Learn more about PRS Guitars: https://prsguitars.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS: NAMM 2026, PRS Guitars, John Mayer Silver Sky, Ed Sheeran guitar, PRS Wild Blue, baritone guitar, guitar gear, new guitars 2026, PRS limited edition, guitar innovation, NAMM Show, musician interviews
Ep 220: CES 2026 Recap | AI, Robotics, Quantum, and Renewable Energy: The Future Is More Practical Than You Think | A Conversation with CTA Senior Director and Futurist Brian Comiskey | Redefining Society and Technology with Marco Ciappelli
CES 2026 Just Showed Us the Future. It's More Practical Than You Think.

CES has always been part crystal ball, part carnival. But something shifted this year.

I caught up with Brian Comiskey—Senior Director of Innovation and Trends at CTA and a futurist by trade—days after 148,000 people walked the Las Vegas floor. What he described wasn't the usual parade of flashy prototypes destined for tech graveyards. This was different. This was technology getting serious about actually being useful.

Three mega trends defined the show: intelligent transformation, longevity, and engineering tomorrow. Fancy terms, but they translate to something concrete: AI that works, health tech that extends lives, and innovations that move us, power us, and feed us. Not technology for its own sake. Technology with a job to do.

The AI conversation has matured. A year ago, generative AI was the headline—impressive demos, uncertain applications. Now the use cases are landing. Industrial AI is optimizing factory operations through digital twins. Agentic AI is handling enterprise workflows autonomously. And physical AI—robotics—is getting genuinely capable. Brian pointed to robotic vacuums that now have arms, wash floors, and mop. Not revolutionary in isolation, but symbolic of something larger: AI escaping the screen and entering the physical world.

Humanoid robots took a visible leap. Companies like Sharpa and Real Hand showcased machines folding laundry, picking up papers, playing ping pong. The movement is becoming fluid, dexterous, human-like. LG even introduced a consumer-facing humanoid. We're past the novelty phase. The question now is integration—how these machines will collaborate, cowork, and coexist with humans.

Then there's energy—the quiet enabler hiding behind the AI headlines.

Korea Hydro & Nuclear Power demonstrated small modular reactors. Next-generation nuclear that could cleanly power cities with minimal waste. A company called Flint Paper Battery showcased recyclable batteries using zinc instead of lithium and cobalt. These aren't sexy announcements. They're foundational.

Brian framed it well: AI demands energy. Quantum computing demands energy. The future demands energy. Without solving that equation, everything else stalls. The good news? AI itself is being deployed for grid modernization, load balancing, and optimizing renewable cycles. The technologies aren't competing—they're converging.

Quantum made the leap from theory to presence. CES launched a new area called Foundry this year, featuring innovations from D-Wave and Quantum Computing Inc. Brian still sees quantum as a 2030s defining technology, but we're in the back half of the 2020s now. The runway is shorter than we thought.

His predictions for 2026: quantum goes more mainstream, humanoid robotics moves beyond enterprise into consumer markets, and space technologies start playing a bigger role in connectivity and research. The threads are weaving together.

Technology conversations often drift toward dystopia—job displacement, surveillance, environmental cost. Brian sees it differently. The convergence of AI, quantum, and clean energy could push things toward something better. The pieces exist. The question is whether we assemble them wisely.

CES is a snapshot. One moment in the relentless march. But this year's snapshot suggests technology is entering a phase where substance wins over spectacle.

That's a future worth watching.

This episode is part of the Redefining Society and Technology podcast's CES 2026 coverage. Subscribe to stay informed as technology and humanity continue to intersect.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Ep 219: CES 2026: Why NVIDIA's Jensen Huang Won the IEEE Medal of Honor | A Conversation with Mary Ellen Randall, IEEE's 2026 President and CEO | Redefining Society and Technology with Marco Ciappelli
Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.

IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917—over a century of recognizing people who didn't just advance technology, but advanced humanity through technology.

That distinction matters more than ever.

I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.

IEEE isn't a small operation. It's the world's largest technical professional society—500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly.

The methods have evolved. The mission hasn't.

"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.

What struck me was her framing of this moment. Randall sees parallels to the Renaissance—painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."

The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI. But the Medal of Honor citation focuses on something broader—the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.

That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep up. IEEE is developing standards to address bias in AI models. They've created certification programs for ethical AI development. They even have standards for protecting young people online—work that doesn't make headlines but shapes the digital environment we all inhabit.

"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."

What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.

The organization includes 189,000 student members—the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.

I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely. We have better tools for that now—virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.

We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow. Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns.

Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/
Ep 218: Nothing Has Changed in Cybersecurity Since the 80s — And That's the Real Problem | A Conversation with Steve Mancini | Redefining Society and Technology with Marco Ciappelli
Dr. Steve Mancini: https://www.linkedin.com/in/dr-steve-m-b59a525/
Marco Ciappelli: https://www.marcociappelli.com/

Nothing Has Changed in Cybersecurity Since War Games — And That's Why We're in Trouble

"Nothing has changed."

That's not what you expect to hear from someone with four decades in cybersecurity. The industry thrives on selling the next revolution, the newest threat, the latest solution. But Dr. Steve Mancini—cybersecurity professor, Homeland Security veteran, and Italy's Honorary Consul in Pittsburgh—wasn't buying any of it. And honestly? Neither was I.

He took me back to his Commodore 64 days, writing basic war dialers after watching War Games. The method? Dial numbers, find an open line, try passwords until one works. Translate that to today: run an Nmap scan, find an open port, brute force your way in. The principle is identical. Only the speed has changed.

This resonated deeply with how I think about our Hybrid Analog Digital Society. We're so consumed with the digital evolution—the folding screens, the AI assistants, the cloud computing—that we forget the human vulnerabilities underneath remain stubbornly analog. Social engineering worked in the 1930s, it worked when I was a kid in Florence, and it works today in your inbox.

Steve shared a story about a family member who received a scam call. The caller asked if their Social Security number "had a six in it." A one-in-nine guess. Yet that simple psychological trick led to remote software being installed on their computer. Technology gets smarter; human psychology stays the same.

What struck me most was his observation about his students—a generation so immersed in technology that they've become numb to breaches. "So what?" has become the default response. The data sells, the breaches happen, you get two years of free credit monitoring, and life goes on. Groundhog Day.

But the deeper concern isn't the breaches. It's what this technological immersion is doing to our capacity for critical thinking, for human instinct. Steve pointed out something that should unsettle us: the algorithms feeding content to young minds are designed for addiction, manipulating brain chemistry with endorphin kicks from endless scrolling. We won't know the full effects of a generation raised on smartphones until they're forty, having scrolled through social media for thirty years.

I asked what we can do. His answer was simple but profound: humans need to decide how much they want technology in their lives. Parents putting smartphones in six-year-olds' hands might want to reconsider. Schools clinging to the idea that they're "teaching technology" miss the point—students already know the apps better than their professors. What they don't know is how to think without them.

He's gone back to paper and pencil tests. Old school. Because when the power goes out—literally or metaphorically—you need a brain that works independently.

Ancient cultures, Steve reminded me, built civilizations with nothing but their minds, parchment, and each other. They were, in many ways, a thousand times smarter than us because they had no crutches. Now we call our smartphones "smart" while they make us incrementally dumber.

This isn't anti-technology doom-saying. Neither Steve nor I oppose technological progress. The conversation acknowledged AI's genuine benefits in medicine, in solving specific problems. But this relentless push for the "easy button"—the promise that you don't have to think, just click—that's where we lose something essential.

The ultimate breach, we concluded, isn't someone stealing your data. It's breaching the mind itself. When we can no longer think, reason, or function without the device in our pocket, the hackers have already won—and they didn't need to write a single line of code.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Ep 217: Author Kate O'Neill's Book "What Matters Next": AI, Meaning, and Why We Can't Delegate Creativity | Redefining Society and Technology with Marco Ciappelli
Author Kate O'Neill's Book "What Matters Next": AI, Meaning, and Why We Can't Delegate Creativity | Redefining Society and Technology with Marco CiappelliKate O'Neill: https://www.koinsights.com/books/what-matters-next-book/Marco Ciappelli: https://www.marcociappelli.com/ When Kate O'Neill tells me that AI's most statistically probable outcome is actually its least meaningful one, I realize we're talking about something information theory has known for decades - but nobody's applying to the way we're using ChatGPT.She's a linguist who became a tech pioneer, one of Netflix's first hundred employees, someone who saw the first graphical web browser and got chills knowing everything was about to change. Her new book "What Matters Next" isn't another panic piece about AI or a blind celebration of automation. It's asking the question nobody seems to want to answer: what happens when we optimize for probability instead of meaning?I've been wrestling with this myself. The more I use AI tools for content, analysis, brainstorming - the more I notice something's missing. The creativity isn't there. It's brilliant for summarization, execution, repetitive tasks. But there's a flatness to it, a regression to the mean that strips away the very thing that makes human communication worth having.Kate puts it plainly: "There is nothing more human than meaning-making. From semantic meaning all the way out to the philosophical, cosmic worldview - what matters and why we're here."Every time we hit "generate" and just accept what the algorithm produces, we're choosing efficiency over meaning. We're delegating the creative process to a system optimized for statistical likelihood, not significance.She laughs when I tell her about my own paradox - that AI sometimes takes MORE time, not less. There's this old developer concept called "yak shaving," where you spend ten times longer writing a program to automate five steps instead of just doing them. 
But the real insight isn't about time management. It's about understanding the relationship between our thoughts and the tools we use to express them.

In her book "What Matters Next," Kate's message is that we need to stay in the loop. Use AI for ugly first drafts, sure. Let it expedite workflow. But keep going back and forth, inserting yourself, bringing meaning and purpose back into the process. Otherwise, we create what she calls "garbage that none of us want to exist in the world with."

I wrote recently about the paradox of learning when we rely entirely on machines. If AI only knows what we've done in the past, and we don't inject new meaning into that loop, it becomes closed. It's like doomscrolling through algorithms that only feed you what you already like - you never discover anything new, never grow, never challenge yourself.

We're living in a Hybrid Analog Digital Society where these tools are unavoidable and genuinely powerful. The question isn't whether to use them. It's how to use them in ways that amplify human creativity rather than flatten it, that enhance meaning rather than optimize it away.

The dominant narrative right now is efficiency, productivity, automation. But what if the real value isn't doing things faster - it's doing things that actually matter? Technology should serve humanity's purpose. Not the other way around. And that purpose can't be dictated by algorithms trained on statistical likelihood. It has to come from us, from the messy, unpredictable, meaningful work of being human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Ep 216: AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society | Expert Panel Discussions With Marco Ciappelli & Sean Martin
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society

🎙️ EXPERT PANEL Hosted By Marco Ciappelli & Sean Martin
Dr. Robert Pearl - Former CEO, Permanente Medical Group; Author, "ChatGPT, MD"
Rob Havasy - Senior Director of Connected Health, HIMSS
John Sapp Jr. - VP & CSO, Texas Mutual Insurance
Jim StClair - VP of Public Health Systems, Altarum
Robert Booker - Chief Strategy Officer, HITRUST

I had one of those conversations recently that reminded me why we do what we do at ITSPmagazine. Not the kind of polite, surface-level exchange you get at most industry events, but a real grappling with the contradictions and complexities that define our Hybrid Analog Digital Society.

This wasn't just another panel discussion about AI in healthcare. This was a philosophical interrogation of who benefits, who pays, and who's at risk when we hand over diagnostic decisions, treatment protocols, and even the sacred physician-patient relationship to algorithms.

The panel brought together some of the most thoughtful voices in healthcare technology: Dr. Robert Pearl, former CEO of the Permanente Medical Group and author of "ChatGPT, MD"; Rob Havasy from HIMSS; John Sapp from Texas Mutual Insurance; Jim StClair from Altarum; and Robert Booker from HITRUST. What emerged wasn't a simple narrative of technological progress or dystopian warning, but something far more nuanced—a recognition that we're navigating uncharted territory where the stakes couldn't be higher.

Dr. Pearl opened with a stark reality: 400,000 people die annually from misdiagnoses in America. Another half million die because we fail to adequately control chronic diseases like hypertension and diabetes. These aren't abstract statistics—they're lives lost to human error, system failures, and the limitations of our current healthcare model.
His argument was compelling: AI isn't replacing human judgment; it's filling gaps that human cognition simply cannot bridge alone.

But here's where the conversation became truly fascinating. Rob Havasy described a phenomenon I've noticed across every technology adoption curve we've covered—the disconnect between leadership enthusiasm and frontline reality. Healthcare executives believe AI is revolutionizing their operations, while nurses and physicians on the floor are quietly subscribing to ChatGPT on their own because the "official" tools aren't ready yet. It's a microcosm of how innovation actually happens: messy, unauthorized, and driven by necessity rather than policy.

The ethical dimensions run deeper than most people realize. When my co-host Sean Martin and I asked about liability, the panel's answer was refreshingly honest: we don't know. The courts will eventually decide who's responsible when an AI diagnostic tool leads to harm. Is it the developer? The hospital? The physician who relied on the recommendation? Right now, everyone wants control over AI deployment but minimal liability for its failures. Sound familiar? It's the classic American pattern of innovation outpacing regulation.

John Sapp introduced a phrase that crystallized the challenge: "enable the secure adoption and responsible use of AI." Not prevent. Not rush recklessly forward. But enable—with guardrails, governance, and a clear-eyed assessment of both benefits and risks. He emphasized that AI governance isn't fundamentally different from other technology risk management; it's just another category requiring visibility, validation, and informed decision-making.

Yet Robert Booker raised a question that haunts me: what do we really mean when we talk about AI in healthcare? Are we discussing tools that empower physicians to provide better care?
Or are we talking about operational efficiency mechanisms designed to reduce costs, potentially at the expense of the human relationship that defines good medicine?

This is where our Hybrid Analog Digital Society reveals its fundamental tensions. We want the personalization that AI promises—real-time analysis of wearable health data, pharmacogenetic insights tailored to individual patients, early detection of deteriorating conditions before they become crises. But we're also profoundly uncomfortable with the idea of an algorithm replacing the human judgment, intuition, and empathy that we associate with healing.

Jim StClair made a provocative observation: AI forces us to confront the uncomfortable truth about how much of medical practice is actually procedure, protocol, and process rather than art. How many ER diagnoses follow predictable decision trees? How many prescriptions are essentially formulaic responses to common presentations? Perhaps AI isn't threatening the humanity of medicine—it's revealing how much of medicine has always been mechanical, freeing clinicians to focus on the parts that genuinely require human connection.

The panel consensus, if there was one, centered on governance. Not as bureaucratic obstruction, but as the framework that allows us to experiment responsibly…
Ep 215: New Event | Global Space Awards 2025 Honors Captain James Lovell's Legacy at Natural History Museum London | A Conversation with Sanjeev Gordhan | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Podcast: Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com
____________
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
____________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
____________
Title: New Event | Global Space Awards 2025 Honors Captain James Lovell's Legacy at Natural History Museum London | A Conversation with Sanjeev Gordhan | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Guests:
Sanjeev Gordhan
General Partner @ Type One Ventures | Space, Deep-Tech, Strategy
On LinkedIn: https://www.linkedin.com/in/sanjeev-gordhan-3714b327/
____________
Short Introduction
The inaugural Global Space Awards celebrates the Golden Era of Space on December 5, 2025, at London's Natural History Museum. Hosted by physicist Brian Greene, the event honors Captain James Lovell's legacy and recognizes innovators transforming space from government domain to commercial frontier in our Hybrid Analog Digital Society.
____________
Article
"There are people who make things happen, there are people who watch things happen, and there are people who wonder what happened. To be successful, you need to be a person who makes things happen."

Those words from Captain James Lovell defined his life—from commanding Apollo 13's near-disastrous mission to inspiring generations of space explorers.
This December, London's Natural History Museum will host the inaugural Global Space Awards, an event dedicating its first evening to Lovell's extraordinary legacy while celebrating those making things happen in space today.

Sanjeev Gordhan, General Partner at Type One Ventures and part of the Global Space Awards organizing team, joined me to discuss why this moment matters. Not just for space enthusiasts, but for everyone whose lives are being transformed by technologies developed beyond Earth's atmosphere.

"Space is not a sector," Sanj explained. "It's a domain that overrides many sectors—agriculture, pharmaceuticals, defense, telecommunications, connectivity. Things we engage with daily."

The timing couldn't be more significant. We're witnessing what Sanj calls a fundamental shift in space economics. In the 1970s and 80s, launching a kilogram into space cost $70,000-$80,000. Today? Around $3,000. That reduction of more than 20x has transformed space from an exclusive government playground into a commercially viable domain where startups can reach orbit on seed funding.

This democratization of space access is precisely why the Global Space Awards emerged. The industry needed something beyond its echo chambers—a red-carpet moment celebrating excellence across the entire spectrum, from research laboratories to scaling businesses, from breakthrough science to sustainable investments.

The response exceeded all expectations. The first-year event received 516 nominations from 38 countries. Sanj and his team were "gobsmacked"—they'd hoped for maybe 150-200. The overwhelming engagement proved what they suspected: the space community was hungry for recognition that spans the complete journey from laboratory to commercial impact.

What makes this particularly fascinating is how space technology circles back to solve Earth's problems. Consider pharmaceuticals: crystallization processes in microgravity create flawless crystal structures impossible to achieve on Earth. The impact?
Chemotherapy treatments that currently require hours-long hospital visits could become subcutaneous injections patients self-administer at home. That's not science fiction—that's research happening now on the International Space Station, waiting for commercial space infrastructure to scale production.

Or agriculture: Earth observation satellites help farmers optimize crop yields, manage water resources, and predict harvests with unprecedented accuracy. Space technology feeding humanity—literally.

The investment mathematics are compelling. For every dollar invested in space innovation, the return to humanity measures around 20x. Not in stock market terms, but in solving problems like food security, medical treatments, climate monitoring, and global connectivity. These aren't abstract future benefits—they're happening now, accelerating as launch costs plummet and commercial operations expand.

The Global Space Awards recognizes this multifaceted reality through eight distinct categories: Playmaker of the Year, Super Scaler, Space Investor, Partnership of the Year, Innovation Breakthrough, Science Breakthrough, Sustainability…
Ep 214: New Book | STREAMING WARS: How Getting Everything We Want Changed Entertainment Forever | Journalist Charlotte Henry Explains How Streaming Changed Entertainment Forever | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Podcast: Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com
____________
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
____________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
____________
Title: New Book | STREAMING WARS: How Getting Everything We Want Changed Entertainment Forever | Journalist Charlotte Henry Explains How Streaming Changed Entertainment Forever | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Guests:
Charlotte Henry
Author, journalist, and broadcaster who created and runs The Addition newsletter looking at the crossover between media and tech.
The Media Society: https://theaddition.substack.com/
On LinkedIn: https://www.linkedin.com/in/charlotteahenry/
____________
Short Introduction
Journalist Charlotte Henry reveals how streaming transformed entertainment in her new book "Streaming Wars: How Getting Everything We Want Changed Entertainment Forever." From Netflix's rise to the 2023 Hollywood strikes, she examines how we consume media, express ourselves, and the surprising return to "old-fashioned" weekly releases in our Hybrid Analog Digital Society.
____________
Article
We used to learn who someone was by looking at their record collection. Walk into their home, scan the vinyl on the shelves, and you'd know—this person loves Metallica, that person's into jazz, someone else collected every Beatles album ever pressed.
Media was how we expressed ourselves, how we told our story without saying a word.

That's gone now. And we might not have noticed it disappearing.

Charlotte Henry, a London-based journalist and author of "Streaming Wars: How Getting Everything We Want Changed Entertainment Forever," sat down with me to discuss something most of us experience daily but rarely examine deeply: how streaming has fundamentally altered not just entertainment, but how we relate to media and each other.

"You can't pop over to someone's house after a first date and see their Spotify playlist," Charlotte pointed out. She's right—you can't browse someone's Netflix queue the way you could their DVD collection, can't judge their Kindle library the way you could scan their bookshelf. We've lost that intimate form of self-expression, that casual cultural reveal that came from physical media.

But Charlotte's book isn't a nostalgic lament. It's something far more valuable: a snapshot of this exact moment in media history, a line in the sand marking where we are before everything changes again. And in technology and media, change is the only constant.

Her starting point is deliberate—the 2023 Hollywood strikes. Not the beginning of streaming's story, but perhaps its most symbolic moment. Writers, actors, costume designers, transportation crews, everyone who keeps Hollywood running stood up and said: this isn't working. The frustrations that exploded that summer had been building for years, all stemming from how streaming fundamentally disrupted the entertainment economy.

My wife works in Hollywood's costume department. She lived through those strikes, felt the direct impact of an industry transformed.
The changes Charlotte documents aren't abstract—they're affecting real careers, real livelihoods, real creative work.

What struck me most about our conversation was how Charlotte brings together all of streaming—not just Netflix and Disney+, but Twitch, Spotify, Apple Music, the specialized services for heavy metal or horror movies, the entire ecosystem of on-demand media. No one had told this complete story before, and it needed telling precisely because it's changing so rapidly.

Consider this: streaming is both revolutionary and circular. We cut the cord, abandoned cable packages, embraced freedom of choice. But now? The streaming services are rebundling themselves into packages that look suspiciously like the cable bundles we rejected. We've come full circle, just with different branding.

The same thing is happening with release schedules. Remember when Netflix revolutionized everything by dropping entire seasons at once? Binge-watching became our cultural norm. But now services are reverting to weekly releases—Stranger Things spread across quarters to ensure multiple subscription payments, Apple TV+ releasing shows one episode per week like it's 1995. We're going back to the future.

Charlotte's analysis of the consumer psychology is fascinating. We've been trained to expect everything, everywhere, immediately. Not just TV shows…
Ep 213: New Book: SPIES, LIES, AND CYBER CRIME | Former FBI Spy Hunter Eric O'Neill Explains How Cybercriminals Use Espionage Techniques to Attack Us | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Podcast: Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com
____________
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
____________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
____________
Title: New Book: SPIES, LIES, AND CYBER CRIME | Former FBI Spy Hunter Eric O'Neill Explains How Cybercriminals Use Espionage Techniques to Attack Us | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Guests:
Eric O'Neill
Keynote Speaker, Cybersecurity Expert, Spy Hunter, Bestselling Author, Attorney
On LinkedIn: https://www.linkedin.com/in/eric-m-oneill/
Find the book on Eric's website: https://ericoneill.net

Sean Martin, CISSP
GTM Advisor | Journalist, Analyst, Technologist | Cybersecurity, Risk, Operations | Brand & Content Marketing | Musician, Photographer, Professor, Moderator | Co-Founder, ITSPmagazine & Studio C60
Website: https://www.seanmartin.com
____________
Short Introduction
Former FBI counterintelligence specialist Eric O'Neill, who caught the most damaging spy in US history, reveals how cybercriminals use traditional espionage techniques to attack us. In his new book "Spies, Lies, and Cyber Crime," he exposes the $14 trillion cybercrime industry and teaches us to recognize attacks in our Hybrid Analog Digital Society.
____________
Article
Trust has become the rarest commodity on Earth.
We can't trust what we see, what we hear, or what we read anymore. And the people exploiting that crisis? They learned their craft from spies.

Eric O'Neill knows this better than most. He's the former FBI counterintelligence specialist who went undercover—as himself—to catch Robert Hanssen, Russia's top spy, embedded in the FBI for 22 years. That story became his first book "Gray Day" and the movie "Breach." But five years later, Eric's back with a very different kind of warning.

His new book "Spies, Lies, and Cyber Crime" isn't another spy memoir. It's a field manual for surviving in a world where criminal syndicates have weaponized traditional espionage techniques against every single one of us. And business is booming—to the tune of $14 trillion annually, making cybercrime the third largest economy on Earth, bigger than Japan and Germany combined.

"They're not attacking our computers," Eric told me during our conversation. "They're attacking you and me personally. They're fooling us into just handing everything over."

The pandemic accelerated everything. We were thrown into a completely virtual environment before security was ready, and that moment marks the biggest single rise of cybercrime in history. While most of us were stuck at home adjusting to Zoom calls, cybercriminals were innovating faster than anyone else, studying how we communicate, work, and associate in digital spaces.

Here's what makes Eric's perspective invaluable: he understands both sides of this war. He spent his FBI career using traditional counterintelligence techniques—deception, impersonation, infiltration, confidence schemes, exploitation, and destruction—to catch spies. Now he watches cybercriminals deploy those exact same tactics against us through our screens.

The top cybercrime gangs have actually hired active intelligence officers from countries like Russia, China, and Iran. These spies moonlight as cybercriminals, bringing state-level tradecraft to street-level scams.
It's sophisticated, organized, and shockingly effective.

Consider the romance scam Eric describes in the book: a widowed grandfather receives a simple text saying "Hey." Being polite, he responds, "Sorry, wrong number." That single response marks him as a target. Over weeks, a "friendship" develops. His new best friend chats with him daily, learns his hopes and dreams, then introduces him to an "investment opportunity."

Within months, the grandfather has invested his entire pension—hundreds of thousands of dollars—into what looks like a legitimate cryptocurrency platform with secure logins and rising account values. When he tries to withdraw money for a family vacation, his friend vanishes. The company doesn't exist. The website was a dummy. Everything is gone.

That's not a quick phishing scam—that's a confidence scheme straight from the spy playbook, adapted for our Hybrid Analog Digital Society where we live in little boxes on screens, increasingly disconnected from physical reality.

The sophistication extends to ransomware operations. These aren't…

Ep 212: Everyone Is Protecting My Password, But Who Is Protecting My Toilet Paper? - Interview with Amberley Brady | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli
Everyone Is Protecting My Password, But Who Is Protecting My Toilet Paper? - Interview with Amberley Brady | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli

AISA CyberCon Melbourne | October 15-17, 2025

Empty shelves trigger something primal in us now. We've lived through the panic, the uncertainty, the realization that our food supply isn't as secure as we thought. Amberley Brady hasn't forgotten that feeling, and she's turned it into action.

Speaking with her from Florence to Sydney ahead of AISA CyberCon in Melbourne, I discovered someone who came to cybersecurity through an unexpected path—studying law, working in policy, but driven by a singular passion for food security. When COVID-19 hit Australia in early 2020 and grocery store shelves emptied, Amberley couldn't shake the question: what happens if this keeps happening?

Her answer was to build realfoodprice.com.au, a platform tracking food pricing transparency across Australia's supply chain. It's based on the Hungarian model, which within three months saved consumers 50 million euros simply by making prices visible from farmer to wholesaler to consumer. The markup disappeared almost overnight when transparency arrived.

"Once you demonstrate transparency along the supply chain, you see where the markup is," Amberley explained. She gave me an example that hit home: watermelon farmers were getting paid 40 cents per kilo while their production costs ran between $1.00 and $1.50. Meanwhile, consumers paid $2.50 to $2.99 year-round. Someone in the middle was profiting while farmers lost money on every harvest.

But this isn't just about fair pricing—it's about critical infrastructure that nobody's protecting. Australia produces food for 70 million people, far more than its own population needs.
That food moves through systems, across borders, through supply chains that depend entirely on technology most farmers never think about in cybersecurity terms.

The new autonomous tractors collecting soil data? That information goes somewhere. The sensors monitoring crop conditions? Those connect to systems someone else controls. China recognized this vulnerability years ago—with 20% of the world's population but only 7% of its arable land, they understood that food security is national security.

At CyberCon, Amberley is presenting two sessions that challenge the cybersecurity community to expand their thinking. "Don't Outsource Your Thinking" tackles what she calls "complacency creep"—our growing trust in AI that makes us stop questioning, stop analyzing with our gut instinct. She argues for an Essential Nine in Australia's cybersecurity framework, adding the human firewall to the technical Essential Eight.

Her second talk, cheekily titled "Everyone Is Protecting My Password, But No One's Protecting My Toilet Paper," addresses food security directly. It's provocative, but that's the point. We saw what happened in Japan recently with the rice crisis—the same panic buying, the same distrust, the same empty shelves that COVID taught us to fear.

"We will run to the store," Amberley said. "That's going to be human behavior because we've lived through that time." And here's the cybersecurity angle: those panics can be manufactured. A fake image of empty shelves, an AI-generated video, strategic disinformation—all it takes is triggering that collective memory.

Amberley describes herself as an early disruptor in the agritech cybersecurity space, and she's right. Most cybersecurity professionals think about hospitals, utilities, financial systems.
They don't think about the autonomous vehicles in fields, the sensor networks in soil, or the supply chain software moving food across continents.

But she's starting the conversation, and CyberCon's audience—increasingly diverse, including people from HR, risk management, and policy—is ready for it. Because at the end of the day, everyone has to eat. And if we don't start thinking about the cyber vulnerabilities in how we grow, move, and price food, we're leaving our most basic need unprotected.

AISA CyberCon Melbourne runs October 15-17, 2025. Virtual coverage provided by ITSPmagazine.

GUEST:
Amberley Brady, Food Security & Cybersecurity Advocate, Founder of realfoodprice.com.au | On LinkedIn: https://www.linkedin.com/in/amberley-b-a62022353/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Ep 211: Beyond Blame: Navigating the Digital World with Our Kids - Interview with Jacqueline (JJ) Jayne | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli
Beyond Blame: Navigating the Digital World with Our Kids

AISA CyberCon Melbourne | October 15-17, 2025

There's something fundamentally broken in how we approach online safety for young people. We're quick to point fingers—at tech companies, at schools, at kids themselves—but Jacqueline Jayne (JJ) wants to change that conversation entirely.

Speaking with her from Florence while she prepared for her session at AISA CyberCon Melbourne this week, it became clear that JJ understands what many in the cybersecurity world miss: this isn't a technical problem that needs a technical solution. It's a human problem that requires us to look in the mirror.

"The online world reflects what we've built for them," JJ told me, referring to our generation. "Now we need to step up and help fix it."

Her session, "Beyond Blame: Keeping Our Kids Safe Online," tackles something most cybersecurity professionals avoid—the uncomfortable truth that being an IT expert doesn't automatically make you equipped to protect the young people in your life. Last year's presentation at CyberCon drew a full house, with nearly every hand raised when she asked who came because of a kid in their world.

That's the fascinating contradiction JJ exposes: rooms full of cybersecurity professionals who secure networks and defend against sophisticated attacks, yet find themselves lost when their own children navigate TikTok, Roblox, or encrypted messaging apps.

The timing couldn't be more relevant. With Australia implementing a social media ban for anyone under 16 starting December 10, 2025, and similar restrictions appearing globally, parents and carers face unprecedented challenges. But as JJ points out, banning isn't understanding, and restriction isn't education.

One revelation from our conversation particularly struck me—the hidden language of emojis. What seems innocent to adults carries entirely different meanings across demographics, from teenage subcultures to, disturbingly, predatory networks online.
An explosion emoji doesn't just mean "boom" anymore. Context matters, and most adults are speaking a different digital dialect than their kids.

JJ, who successfully guided her now 19-year-old son through the gaming and social media years, isn't offering simple solutions, because there aren't any. What she provides instead are conversation starters, resources tailored to different age groups, and even AI prompts that parents can customize for their specific situations.

The session reflects a broader shift happening at events like CyberCon. It's no longer just IT professionals in the room. HR representatives, risk managers, educators, and parents are showing up because they've realized that digital safety doesn't respect departmental boundaries or professional expertise.

"We were analog brains in a digital world," JJ said, capturing our generational position perfectly. But today's kids? They're born into this interconnectedness, and COVID accelerated everything to a point where taking it away isn't an option.

The real question isn't who to blame. It's what role each of us plays in creating a safer digital environment. And that's a conversation worth having—whether you're at the Convention and Exhibition Centre in Melbourne this week or joining virtually from anywhere else.

AISA CyberCon Melbourne runs October 15-17, 2025. Virtual coverage provided by ITSPmagazine.
___________
GUEST:
Jacqueline (JJ) Jayne, reducing human error in cyber and teaching 1 million people online safety. On LinkedIn: https://www.linkedin.com/in/jacquelinejayne/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage?
Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Ep 210: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Maya Ackerman, PhD
Generative AI Pioneer | Author | Keynote Speaker
On LinkedIn: https://www.linkedin.com/in/mackerma/
Website: http://www.maya-ackerman.com

_____

Short Introduction: Dr. Maya Ackerman, AI researcher and author of "Creative Machines: AI, Art, and Us," challenges our assumptions about artificial intelligence and creativity. She argues that ChatGPT is intentionally limited, that hallucinations are features rather than bugs, and that we must stop treating AI as an all-knowing oracle in our Hybrid Analog Digital Society.

_____

Article

Dr. Maya Ackerman is a pioneer in the generative AI industry, associate professor of Computer Science and Engineering at Santa Clara University, and co-founder/CEO of WaveAI, one of the earliest generative AI startups. Ackerman has been researching generative AI models for text, music, and art since 2014, and was an early advocate for human-centered generative AI, bringing awareness to the power of AI to profoundly elevate human creativity. Under her leadership as co-founder and CEO, WaveAI has emerged as a leader in musical AI, benefiting millions of artists and creators with its products LyricStudio and MelodyStudio.

Dr. Ackerman's expertise and innovative vision have earned her numerous accolades, including being named a "Woman of Influence" by the Silicon Valley Business Journal. She is a regular feature in prestigious media outlets and has spoken on notable stages around the world, such as the United Nations, IBM Research, and Stanford University. Her insights into the convergence of AI and creativity are shaping the future of both technology and music.
A University of Waterloo PhD and Caltech Postdoc, her unique blend of scholarly rigor and entrepreneurial acumen makes her a sought-after voice in discussions about the practical and ethical implications of AI in our rapidly evolving digital world.

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

Dr. Maya Ackerman, AI researcher and author of "Creative Machines: AI, Art, and Us," challenges our assumptions about artificial intelligence and creativity. She argues that today's mainstream AI is intentionally held back from being truly creative, that hallucinations are features rather than bugs, and that we must stop treating AI as an all-knowing oracle in our Hybrid Analog Digital Society.

⸻ Article ⸻

We talk about AI hallucinations like they're bugs that need fixing. Glitches in the matrix. Errors to be eliminated. But what if we've got it completely backward?

Dr. Maya Ackerman sat in front of her piano—a detail that matters more than you'd think—and told me something that made me question everything I thought I understood about artificial intelligence and creativity. The AI we use every day, the ChatGPT that millions rely on for everything from writing emails to generating ideas, is intentionally held back from being truly creative.

Let that sink in for a moment. ChatGPT, the tool millions use daily, is designed to be convergent rather than divergent.
It's built to replace search engines, to give us "correct" answers, to be an all-knowing oracle. And that's exactly the problem.

Maya's journey into this field began ten years ago, long before generative AI became the buzzword du jour. Back in 2015, she made what her employer called a "risky decision"—switching her research focus to computational creativity, the academic precursor to what we now call generative AI. By 2017, she'd launched one of the earliest generative AI startups, WaveAI, helping people write songs. Investors told her the whole direction didn't make sense. Then came late 2022, and suddenly everyone understood.

What fascinates me about Maya's perspective is how she frames AI as humanity's collective consciousness made manifest. We wrote, we created the printing press, we built the internet, we filled it with our knowledge and our forums and our social media—and then we created a functioning brain from it. As she puts it, we can now talk with humanity's collective consciousness, including what Carl Jung called the collective shadow—both the brilliance and the biases.

This is where our c

Ep 209: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation | Analog Minds in a Digital World: Part 2 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____

Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

_____

Watch on YouTube: https://youtu.be/nFn6CcXKMM0

_____

My Website: https://www.marcociappelli.com

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

Introducing a New Series: Analog Minds in a Digital World: Reflections from Our Hybrid Analog-Digital Society

Part II: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation

I've been testing small speakers lately. Nothing fancy—just little desktop units that cost less than a decent dinner. As I cycled through different genres, something unexpected happened. Classical felt lifeless, missing all its dynamic range. Rock came across harsh and tinny. Jazz lost its warmth and depth. But lo-fi? Lo-fi sounded... perfect.

Those deliberate imperfections—the vinyl crackle, the muffled highs, the compressed dynamics—suddenly made sense on equipment that couldn't reproduce perfection anyway. The aesthetic limitations of the music matched the technical limitations of the speakers.
It was like discovering that some songs were accidentally designed for constraints I never knew existed.

This moment sparked a bigger realization about how we navigate our hybrid analog-digital world: sometimes our most profound innovations emerge not from perfection, but from embracing limitations as features.

Lo-fi wasn't born in boardrooms or designed by committees. It emerged from bedrooms, garages, and basement studios where young musicians couldn't afford professional equipment. The 4-track cassette recorder—that humble Portastudio that let you layer instruments onto regular cassette tapes for a fraction of what professional studio time cost—became an instrument of democratic creativity. Suddenly, anyone could record music at home. Sure, it would sound "imperfect" by industry standards, but that imperfection carried something the polished recordings lacked: authenticity.

The Velvet Underground recorded on cheap equipment and made it sound revolutionary—so revolutionary that, as the saying goes, they didn't sell many records, but everyone who bought one started a band. Pavement turned bedroom recording into art. Beck brought lo-fi to the mainstream with "Mellow Gold." These weren't artists settling for less—they were discovering that constraints could breed creativity in ways unlimited resources never could.

Today, in our age of infinite digital possibility, we see a curious phenomenon: young creators deliberately adding analog imperfections to their perfectly digital recordings. They're simulating tape hiss, vinyl scratches, and tube saturation using software plugins. We have the technology to create flawless audio, yet we choose to add flaws back in.

What does this tell us about our relationship with technology and authenticity?

There's something deeply human about working within constraints. Twitter's original 140-character limit didn't stifle creativity—it created an entirely new form of expression.
Instagram's square format—a deliberate homage to Polaroid's instant film—forced photographers to think differently about composition. Think about that for a moment: Polaroid's square format was originally a technical limitation of instant film chemistry and optics, yet it became so aesthetically powerful that decades later, a digital platform with infinite formatting possibilities chose to recreate that constraint. Even more, Instagram added filters that simulated the color shifts, light leaks, and imperfections of analog film. We had achieved perfect digital reproduction, and immediately started adding back the "flaws" of the technology we'd left behind.

The same pattern appears in video: Super 8 film gave you exactly 3 minutes and 12 seconds per cartridge at standard speed—grainy, saturated, light-leaked footage that forced filmmakers to be economical with every shot. Today, TikTok recreates that brevity digitally, spawning a generation of micro-storytellers who've mastered the art of the ultra-short form, sometimes even adding Super 8-style filters to their
Ep 208: AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Eli Lopian
Founder of Typemock Ltd | Author of AIcracy: Beyond Democracy | AI & Governance Thought Leader
On LinkedIn: https://www.linkedin.com/in/elilopian/
Book: https://aicracy.ai

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I had one of those conversations that makes you question everything you thought you knew about democracy, governance, and the future of human society. Eli Lopian, founder of Typemock and author of the provocative book on AI-cracy, walked me through what might be the most intriguing political theory I've encountered in years.

⸻ Article ⸻

Technology entrepreneur Eli Lopian joins Marco to explore "AI-cracy" - a revolutionary governance model where artificial intelligence writes laws based on abundance metrics while humans retain judgment.
This fascinating conversation examines how we might transition from broken democratic systems to AI-assisted governance in our evolving Hybrid Analog Digital Society.

Picture this scenario: you're sitting in a pub with friends, listening to them argue about which political rally to attend, and suddenly you realize something profound. As Eli told me, it's like watching people fight over which side of the train to sit on while the train itself is heading in completely the wrong direction. That metaphor perfectly captures where we are with democracy today.

Eli's background fascinates me - breaking free from a religious upbringing at 16, building a successful AI startup for the past decade, and now proposing something that sounds like science fiction but feels increasingly inevitable. His central premise stopped me in my tracks: no human being should be allowed to write laws anymore. Only AI should create legislation, guided by what he calls an "abundance metric" - essentially optimizing for human happiness, freedom, and societal wellbeing.

But here's where it gets really interesting. Eli isn't proposing we hand over control to a single AI overlord. Instead, he envisions three separate AI systems - one controlled by the government, one by the opposition, and one by an NGO - all working with the same data but operated by different groups. They must reach identical conclusions for any law to proceed. If they disagree, human experts investigate why.

What struck me most was how this could actually restore direct democracy. In ancient Athens, every citizen participated in the polis. We can't do that with hundreds of millions of people, but AI could process everyone's input instantly. Imagine submitting your policy ideas directly to an AI system that responds within hours, explaining why your suggestion would or wouldn't improve societal abundance.
It's like having the Athenian square scaled to modern complexity.

The safeguards Eli proposes reveal his deep understanding of human nature. No AI can judge humans - that remains strictly a human responsibility. Citizens don't vote for charismatic politicians anymore; they vote for actual policies. Every three years, people choose their preferred policies. Every decade, they set ambitious collective goals - cure cancer, reach Mars, whatever captures society's imagination.

Living in our Hybrid Analog Digital Society, we already see AI creeping into governance. Lawyers use AI, governments employ algorithms for efficiency, and citizens increasingly turn to ChatGPT for advice they once sought from doctors or therapists. Eli's insight is that we're heading toward AI governance whether we plan it or not - so why not design it properly from the start?

His most compelling point addresses a fear I share: that AI lacks creativity. Eli argues this is actually a feature, not a bug. AI generates rather than truly creates. The creative spark - proposing that universal basic income experiment, suggesting we test new social policies, imagining those decade-long goals - that remains uniquely human. AI simply processes our creativity faster and more fairly than our current broken systems.

The privacy question loomed large in our conversation. Eli proposes a brilliant separation: your personal AI mentor (helping y

Ep 207: We Have All the Information, So Why Do We Know Less? | Analog Minds in a Digital World: Part 1 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____

Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

_____

Watch on YouTube: https://youtu.be/nFn6CcXKMM0

_____

My Website: https://www.marcociappelli.com

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

We Have All the Information, So Why Do We Know Less?

Introducing: Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

But with the explosion of generative AI, this hybrid reality isn't just a philosophical concept anymore—it's our lived experience. Every day, we navigate between analog intuition and digital efficiency, between human wisdom and machine intelligence, between the messy beauty of physical presence and the seductive convenience of virtual interaction.

This newsletter series will explore the tensions, paradoxes, and possibilities of being fundamentally analog beings in an increasingly digital world. We're not just using technology; we're being reshaped by it while simultaneously reshaping it with our deeply human, analog sensibilities.

Analog Minds in a Digital World: Part 1
We Have All the Information, So Why Do We Know Less?

I was thinking about my old set of encyclopedias the other day.
You know, those heavy volumes that sat on shelves like silent guardians of knowledge, waiting for someone curious enough to crack them open. When I needed to write a school report on, say, the Roman Empire, I'd pull out Volume R and start reading.

But here's the thing: I never just read about Rome.

I'd get distracted by Romania, stumble across something about Renaissance art, flip backward to find out more about the Reformation. By the time I found what I was originally looking for, I'd accidentally learned about three other civilizations, two art movements, and the invention of the printing press. The journey was messy, inefficient, and absolutely essential.

And if I was in a library... well then just imagine the possibilities.

Today, I ask Google, Claude, or ChatGPT about the Roman Empire, and in thirty seconds, I have a perfectly formatted, comprehensive overview that would have taken me hours to compile from those dusty volumes. It's accurate, complete, and utterly forgettable.

We have access to more information than any generation in human history. Every fact, every study, every perspective is literally at our fingertips. Yet somehow, we seem to know less. Not in terms of data acquisition—we're phenomenal at that—but in terms of deep understanding, contextual knowledge, and what I call "accidental wisdom."

The difference isn't just about efficiency. It's about the fundamental way our minds process and retain information. When you physically search through an encyclopedia, your brain creates what cognitive scientists call "elaborative encoding"—you remember not just the facts, but the context of finding them, the related information you encountered, the physical act of discovery itself.

When AI gives us instant answers, we bypass this entire cognitive process. We get the conclusion without the journey, the destination without the map.
It's like being teleported to Rome without seeing the countryside along the way—technically efficient, but something essential is lost in translation.

This isn't nostalgia talking. I use AI daily for research, writing, and problem-solving. It's an incredible tool. But I've noticed something troubling: my tolerance for not knowing things immediately has disappeared. The patience required for deep learning—the kind that happens when you sit with confusion, follow tangents, make unexpected connections—is atrophying like an unused muscle.

We're creating a generation of analog minds trying to function in a digital reality that prioritizes speed over depth, answers over questions, conclusions over curiosity. And in doing so, we might be outsourcing the very process that makes us wise.

Ancient Greeks had a concept called "metis"—practical wisdom that comes from experience, pattern recognition, and intuitive understanding developed through continuous engagement with complexity. In Ancient Greek, metis (Μῆτις) means wisdom, skill, or craft, and it also describes a form of wily, cunning intelligence. It can refer to the pre-Olympian goddess of wisdom and counsel, who was the first wi
Ep 206: Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human Interview | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human Interview | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Jeff Burningham
Tech Entrepreneur. Investor. National Best Selling Author. Explorer of Human Potential. My book #TheLastBookWrittenByAHuman is available now.
On LinkedIn: https://www.linkedin.com/in/jeff-burningham-15a01a7b/
Book: https://www.simonandschuster.com/books/The-Last-Book-Written-by-a-Human/Jeff-Burningham/9781637634561

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

Entrepreneur and author Jeff Burningham explores how artificial intelligence serves as a cosmic mirror reflecting humanity's true nature. Through his book "The Last Book Written by a Human," he argues that as machines become more intelligent, humans must become wiser. This conversation examines our collective journey through disruption, reflection, transformation, and evolution in our Hybrid Analog Digital Society.

⸻ Article ⸻

I had one of those conversations that made me pause and question everything I thought I knew about our relationship with technology.
Jeff Burningham, serial entrepreneur and author of "The Last Book Written by a Human: Becoming Wise in the Age of AI," joined me to explore a perspective that's both unsettling and profoundly hopeful.

What struck me most wasn't Jeff's impressive background—founding multiple tech companies, running for governor of Utah, building a $5 billion real estate empire. It was his spiritual awakening in Varanasi, India, where a voice in his head insisted he was a writer. That moment of disruption led to years of reflection and ultimately to a book that challenges us to see AI not as our replacement, but as our mirror.

"As our machines become more intelligent, our work as humans is to become more wise," Jeff told me. This isn't just a catchy phrase—it's the thesis of his entire work. He argues that AI functions as what he calls a "cosmic mirror to humanity," reflecting back to us exactly who we've become as a species. The question becomes: do we like what we see?

This perspective resonates deeply with how we exist in our Hybrid Analog Digital Society. We're no longer living separate digital and physical lives—we're constantly navigating both realms simultaneously. AI doesn't just consume our data; it reflects our collective behaviors, biases, and beliefs back to us in increasingly sophisticated ways.

Jeff structures his thinking around four phases that mirror both technological development and personal growth: disruption, reflection, transformation, and evolution. We're currently somewhere between reflection and transformation, he suggests, at a crucial juncture where we must choose between two games. The old game prioritizes cash as currency, power as motivation, and control as purpose. The new game he envisions centers on karma as currency, authenticity as motivation, and love as purpose.

What fascinates me is how this connects to the hero's journey—the narrative structure underlying every meaningful story from Star Wars to our own personal transformations.
Jeff sees AI's emergence as part of an inevitable journey, a necessary disruption that forces us to confront fundamental questions about consciousness, creativity, and what makes us human.

But here's where it gets both beautiful and challenging: as machines handle more of our "doing," we're left with our "being." We're human beings, not human doings, as Jeff reminds us. This shift demands that we reconnect with our bodies, our wisdom, our imperfections—all the messy, beautiful aspects of humanity that AI cannot replicate.

The conversation reminded me why I chose "Redefining" for this podcast's title. We're not just adapting to new technology; we're fundamentally reexamining what it means to be human in an age of artificial intelligence. This isn't about finding the easy button or achieving perfect efficiency—it's about embracing what makes us gloriously, imperfectly human.

Jeff's book launches August 19th, and while it won't literally be the last book written by a human, the title serves as both warning and invitation. If we don't actively choos

Ep 205: The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____

Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

_____

Watch on YouTube: https://youtu.be/OYBjDHKhZOM

_____

My Website: https://www.marcociappelli.com

_____________________________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I've been collecting vintage radios lately—just started, really—drawn to their analog souls in ways I'm still trying to understand. Each one I find reminds me of a small, battered transistor radio from my youth. It belonged to my father, and before that, probably my grandfather. The leather case was cracked, the antenna wobbled, and the dial drifted if you breathed on it wrong. But when I was sixteen, sprawled across my bedroom floor in that small town near Florence with homework scattered around me, this little machine was my portal to everything that mattered.

Late at night, I'd start by chasing the latest hits and local shows on FM, but then I'd venture into the real adventure—tuning through the static on AM and shortwave frequencies. Voices would emerge from the electromagnetic soup—music from London, news from distant capitals, conversations in languages I couldn't understand but somehow felt.
That radio gave me something I didn't even know I was missing: the profound sense of belonging to a world much bigger than my neighborhood, bigger than my small corner of Tuscany.

What I didn't realize then—what I'm only now beginning to understand—is that I was holding the first smartphone in human history.

Not literally, of course. But functionally? Sociologically? That transistor radio was the prototype for everything that followed: the first truly personal media device that rewired how young people related to the world, to each other, and to the adults trying to control both.

But to understand why the transistor radio was so revolutionary, we need to trace radio's remarkable journey through the landscape of human communication—a journey that reveals patterns we're still living through today.

When Radio Was the Family Hearth

Before my little portable companion, radio was something entirely different. In the 1930s, radio was furniture—massive, wooden, commanding the living room like a shrine to shared experience. Families spent more than four hours a day listening together, with radio ownership reaching nearly 90 percent by 1940. From American theaters that wouldn't open until after "Amos 'n' Andy" to British families gathered around their wireless sets, from RAI broadcasts bringing opera into Tuscan homes—entire communities synchronized their lives around these electromagnetic rituals.

Radio didn't emerge in a media vacuum, though. It had to find its place alongside the dominant information medium of the era: newspapers. The relationship began as an unlikely alliance. In the early 1920s, newspapers weren't threatened by radio—they were actually radio's primary boosters, creating tie-ins with broadcasts and even owning stations. Detroit's WWJ was owned by The Detroit News, initially seen as "simply another press-supported community service."

But then came the "Press-Radio War" of 1933-1935, one of the first great media conflicts of the modern age.
Newspapers objected when radio began interrupting programs with breaking news, arguing that instant news delivery would diminish paper sales. The 1933 Biltmore Agreement tried to restrict radio to just two five-minute newscasts daily—an early attempt at what we might now recognize as media platform regulation.

Sound familiar? The same tensions we see today between traditional media and digital platforms, between established gatekeepers and disruptive technologies, were playing out nearly a century ago. Rather than one medium destroying the other, they found ways to coexist and evolve—a pattern that would repeat again and again.

By the mid-1950s, when the transistor was perfected, radio was ready for its next transformation.

The Real Revolution Was Social, Not Technical

This is where my story begins, but it's also where radio's story reaches its most profound transformation. The transistor radio didn't just make radio portable—it fundamentally altered the social dynamics of media consumption and youth culture itself.

Remember, radio had spent its first three decades as a communal experience. Parents controlled what the family heard and when. Bu
Ep 204: From Broadcasting to AI Agents: Mark Smith on Technology's 100-Year Evolution at IBC 2025 Amsterdam | On Location Event Coverage Podcast With Sean Martin & Marco Ciappelli
I had one of those conversations that reminded me why I'm so passionate about exploring the intersection of technology and society. Speaking with Mark Smith, a board member at IBC and co-lead of their accelerator program, I found myself transported back to my roots in communication and media studies, but with eyes wide open to what's coming next.

Mark has spent over 30 years in media technology, including 23 years building Mobile World Congress in Barcelona. When someone with that depth of experience gets excited about what's happening now, you pay attention. And what's happening at IBC 2025 in Amsterdam this September is nothing short of a redefinition of how we create, distribute, and authenticate content.

The numbers alone are staggering: 1,350 exhibitors across 14 halls, nearly 300 speakers, 45,000 visitors. But what struck me wasn't the scale—it's the philosophical shift happening in how we think about media production. We're witnessing television's centennial year, with the first demonstrations happening in 1925, and yet we're simultaneously seeing the birth of entirely new forms of creative expression.

What fascinated me most was Mark's description of their Accelerator Media Innovation Program. Since 2019, they've run over 50 projects involving 350 organizations, creating what he calls "a safe environment" for collaboration. This isn't just about showcasing new gadgets—it's about solving real challenges that keep media professionals awake at night. In our Hybrid Analog Digital Society, the traditional boundaries between broadcaster and audience, between creator and consumer, are dissolving faster than ever.

The AI revolution in media production particularly caught my attention. Mark spoke about "AI assistant agents" and "agentic AI" with the enthusiasm of someone who sees liberation rather than replacement. As he put it, "It's an opportunity to take out a lot of laborious processes."
But more importantly, he emphasized that it's creating new jobs—who would have thought "AI prompter" would become a legitimate profession?This perspective challenges the dystopian narrative often surrounding AI adoption. Instead of fearing the technology, the media industry seems to be embracing it as a tool for enhanced creativity. Mark's excitement was infectious when describing how AI can remove the "boring" aspects of production, allowing creative minds to focus on what they do best—tell stories that matter.But here's where it gets really interesting from a sociological perspective: the other side of the screen. We talked about how streaming revolutionized content consumption, giving viewers unprecedented control over their experience. Yet Mark observed something I've noticed too—while the technology exists for viewers to be their own directors (choosing camera angles in sports, for instance), many prefer to trust the professional's vision. We're not necessarily seeking more control; we're seeking more relevance and authenticity.This brings us to one of the most critical challenges of our time: content provenance. In a world where anyone can create content that looks professional, how do we distinguish between authentic journalism and manufactured narratives? Mark highlighted their work on C2PA (content provenance initiative), developing tools that can sign and verify media sources, tracking where content has been manipulated.This isn't just a technical challenge—it's a societal imperative. As Mark noted, YouTube is now the second most viewed platform in the UK. When user-generated content competes directly with traditional media, we need new frameworks for understanding truth and authenticity. The old editorial gatekeepers are gone; we need technological solutions that preserve trust while enabling creativity.What gives me hope is the approach I heard from Mark and his colleagues. 
They're not trying to control technology's impact on society—they're trying to shape it consciously. The IBC Accelerator Program represents something profound: an industry taking responsibility for its own transformation, creating spaces for collaboration rather than competition, focusing on solving real problems rather than just building cool technology.The Google Hackfest they're launching this year perfectly embodies this philosophy. Young broadcast engineers and software developers working together on real challenges, supported by established companies like Formula E. It's not about replacing human creativity with artificial intelligence—it's about augmenting human potential with technological tools.As I wrapped up our conversation, I found myself thinking about my own journey from studying sociology of communication in a pre-internet world to hosting podcasts about our digital transformation. Technology doesn't just change how we communicate—it changes who we are as communicators, as creators, as human beings sharing stories.IBC 2025 isn't just a trade show; it's a glimpse into how we're choosing to redefine our relationship with media te
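The C2PA work Mark describes rests on a simple idea: attach a signed provenance manifest to a piece of media, so any later manipulation breaks the verification. As a rough, self-contained illustration only (not the actual C2PA specification, which uses X.509 certificates and manifests embedded in the media file), here is a toy sketch in Python; the demo key and the "IBC Newsroom" creator name are made up:

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # hypothetical shared key; real C2PA uses certificate-based signatures

def sign_manifest(content: bytes, creator: str) -> dict:
    """Build a minimal provenance manifest: a content hash plus an HMAC 'signature'."""
    manifest = {"creator": creator, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()  # canonical serialization
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Re-hash the content and re-compute the signature; any edit breaks one or both."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        hmac.new(SECRET, payload, hashlib.sha256).hexdigest(), manifest["signature"]
    )
    hash_ok = hashlib.sha256(content).hexdigest() == manifest["sha256"]
    return sig_ok and hash_ok

video = b"original broadcast frames"
m = sign_manifest(video, creator="IBC Newsroom")
assert verify(video, m)                    # untouched content verifies
assert not verify(b"deepfaked frames", m)  # any manipulation is detectable
```

In the real standard, verification uses public-key cryptography so anyone can check a manifest without holding a secret; the HMAC here is only to keep the sketch dependency-free.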

Ep 203 | Teaser | Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli | Read by Tape3
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Mats Larsson
New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!
On LinkedIn: https://www.linkedin.com/in/matslarsson-author/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.
______
Listen to the Full Episode:
https://redefiningsocietyandtechnologypodcast.com/episodes/why-electric-vehicles-need-an-apollo-program-the-renweable-energy-infrastructure-reality-were-ignoring-a-conversation-with-mats-larsson-redefining-society-and-technology-podcast-with-marco-ciappelli
__________________
Enjoy. Reflect.
Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company or in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Ep 202 | Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Mats Larsson
New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!
On LinkedIn: https://www.linkedin.com/in/matslarsson-author/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.
⸻ Article ⸻
When Reality Meets Electric Dreams: Lessons from the Apollo Mindset

I had one of those conversations that stops you in your tracks.
Mats Larsson, calling in from Stockholm while I connected from Italy, delivered a perspective on electric vehicles that shattered my comfortable assumptions about our technological transition.

"First of all, we need to admit that we do not know exactly how to build the future. And then we need to start building it." This wasn't just Mats being philosophical—it was a fundamental admission that our approach to electrification has been dangerously naive.

We've been treating the electric vehicle transition like upgrading our smartphones—expecting it to happen seamlessly, almost magically, while we go about our daily lives. But as Mats explained, referencing the Apollo program, monumental technological shifts require something we've forgotten how to do: comprehensive, sustained, coordinated investment in infrastructure we can't even fully envision yet.

The numbers are staggering. To electrify all US transportation, we'd need to double power generation—that's the equivalent of 360 nuclear reactors' worth of electricity. For hydrogen? Triple it. While Tesla and Chinese manufacturers gained their decade-plus advantage through relentless investment cycles, traditional automakers treated electric vehicles as "defensive moves," showcasing capability without commitment.

But here's what struck me most: we need entirely new competencies. "Electrification strategists and electrification architects," as Mats called them—professionals who can design power grids capable of charging thousands of logistics vehicles daily, infrastructure that doesn't exist in our current planning vocabulary.

We're living in a fascinating paradox of our Hybrid Analog Digital Society. We've become so accustomed to frictionless technological evolution—download an update, get new features—that we've lost appreciation for transitions requiring fundamental systemic change. Electric vehicles aren't just different cars; they're a complete reimagining of energy distribution, urban planning, and even our relationship with mobility itself.

This conversation reminded me why I love exploring the intersection of technology and society. It's not enough to build better batteries or faster chargers. We're redesigning civilization's transportation nervous system, and we're doing it while pretending it's just another product launch.

What excites me isn't just the technological challenge—it's the human coordination required. Like the Apollo program, this demands that rare combination of visionary leadership, sustained investment, and public will that transcends political cycles and market quarters.

Listen to my full conversation with Mats, and let me know: Are we ready to embrace the Apollo mindset for our electric future? Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue this conversation—because in our rapidly evolving world, these discussions shape the future we're building together.

Cheers,
Marco

⸻ Keywords ⸻
Electric Vehicles, Technology And Society, Infrastructure, Innovation, Sustainable Transport, electric vehicles, society and technology, infrastructure development, apollo program, energy transition, government investment, technological transformation, sustainable mobility

Ep 201 | The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional | Reflections from Black Hat USA 2025 on the Marketing That Chose Fiction Over Facts
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
August 18, 2025

The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional
Reflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over Facts
By Marco Ciappelli

Sean Martin, CISSP, just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.

This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.

But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.

Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.

The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.

But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.

What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles. A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.

But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.

From a sociological perspective, we're witnessing the breakdown of the social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society. But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.

This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science."

Ep 200 | The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves | Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative | A Musing On Society & Technology Newsletter
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
August 9, 2025

The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves
Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative

Walking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.

The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.

It's Talos all over again. In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. Same story, different millennium.

But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.

Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms, or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.

But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when it makes claims so grandiose that its own engineers can't explain them, something deeper is broken.

If marketing departments think CISOs are buying these lies, they have another think coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.

Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities.

Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time. When the people building these systems warn us about trusting them too much, shouldn't we listen?

This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation?

The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?

Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions? From a s
Ep 199 | Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity | A Conversation with Maury Rogow | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity
Guest: Maury Rogow
CEO, Rip Media Group | I grow businesses with AI + video storytelling. Honored to have 70k+ professionals & 800+ brands grow by 2.5 Billion | Published: Inc, Entrepreneur, Forbes
On LinkedIn: https://www.linkedin.com/in/mauryrogow/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
I sat across—metaversically speaking—from Maury Rogow, a man who's lived three lives—tech executive, Hollywood producer, storytelling evangelist—and watched him grapple with the same question haunting creators everywhere: Are we teaching our replacements to dream? In our latest conversation on Redefining Society and Technology, we explored whether AI is the ultimate creative collaborator or the final chapter in human artistic expression.
⸻ Article ⸻
I sat across from Maury Rogow—a tech exec, Hollywood producer, and storytelling strategist—and watched him wrestle with a question more and more of us are asking: Are we teaching our replacements to dream?

Our latest conversation on Redefining Society and Technology dives straight into that uneasy space where AI meets human creativity. Is generative AI the ultimate collaborator… or the beginning of the end for authentic artistic expression?

I’ve had my own late-night battles with AI writing tools, struggling to coax a rhythm out of ChatGPT that didn’t feel like recycled marketing copy. Eventually, I slammed my laptop shut and thought: “Screw this—I’ll write it myself.” But even in that frustration, something creative happened. That tension? It’s real. It’s generative. And it’s something Maury deeply understands.

“Companies don’t know how to differentiate themselves,” he told me. “So they compete on cost or get drowned out by bigger brands. That’s when they fail.”

Now that AI is democratizing storytelling tools, the danger isn’t that no one can create—it’s that everyone’s content sounds the same. Maury gets AI-generated brand pitches daily that all echo the same structure, voice, and tropes—“digital ventriloquism,” as I called it.

He laughed when I told him about my AI struggles. “It’s like the writer that’s tired,” he said. “I just start a new session and tell it to take a nap.” But beneath the humor is a real fear: What happens when the tools meant to support us start replacing us?

Maury described a recent project where they recreated a disaster scene—flames, smoke, chaos—using AI compositing. No massive crew, no fire trucks, no danger. And no one watching knew the difference. Or cared.

We’re not just talking about job displacement. We’re talking about the potential erasure of the creative process itself—that messy, human, beautiful thing machines can mimic but never truly live.

And yet… there’s hope. Creativity has always been about connecting the dots only you can see. When Maury spoke about watching Becoming Led Zeppelin and reliving the memories, the people, the context behind the music—that’s the spark AI can’t replicate. That’s the emotional archaeology of being human.

The machines are learning to dream. But maybe—just maybe—we’re the ones who still know what dreams are worth having.

Cheers,
Marco

⸻ Keywords ⸻
artificial intelligence creativity, AI content creation, human vs AI storytelling, generative AI impact, creative industry disruption, AI writing tools, future of creativity, technology and society, AI ethics philosophy, human creativity preservation, storytelling in AI age, creative professionals AI, digital transformation creativity, AI collaboration tools, machine learning creativity, content creation revolution, artistic expression AI, creative industry jobs, AI generated content, human-AI creative partnership
Ep 198 | How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of the book “From Ideas to Impact” | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of “From Ideas to Impact”
Guest: Michael Sheldrick
Co-Founder, Global Citizen | Author of “From Ideas to Impact” (Wiley 2024) | Professor, Columbia University | Speaker, Board Member and Forbes.com Contributor
WebSite: https://michaelsheldrick.com
On LinkedIn: https://www.linkedin.com/in/michael-sheldrick-30364051/
Global Citizen: https://www.globalcitizen.org/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Michael Sheldrick returns to Redefining Society and Technology to share how Global Citizen has mobilized billions in aid and inspired millions through music, tech, and collective action. From social media activism to systemic change, this conversation explores how creativity and innovation can fuel a global movement for good.
⸻ Article ⸻
Sometimes, the best stories are the ones that keep unfolding — and Michael Sheldrick’s journey is exactly that. When we first spoke, Global Citizen had just (almost) released their book From Ideas to Impact. This time, I invited Michael back on Redefining Society and Technology because his story didn’t stop at the last chapter.

From a high school student in Western Australia who doubted his own potential, to co-founding one of the most influential global advocacy movements — Michael’s path is a testament to what belief and purpose can spark. And when purpose is paired with music, technology, and strategic activism? That’s where the real magic happens.

In this episode, we dig into how Global Citizen took the power of pop culture and built a model for global change. Picture this: a concert ticket you don’t buy, but earn by taking action. Signing petitions, tweeting for change, amplifying causes — that’s the currency. It’s simple, smart, and deeply human.

Michael shared how artists like John Legend and Coldplay joined their mission not just to play music, but to move policy. And they did — unlocking over $40 billion in commitments, impacting a billion lives. That’s not just influence. That’s impact.

We also talked about the role of technology. AI, translation tools, Salesforce dashboards, even Substack — they’re not just part of the story, they’re the infrastructure. From grant-writing to movement-building, Global Citizen’s success is proof that the right tools in the right hands can scale change fast.

Most of all, I loved hearing how digital actions — even small ones — ripple out globally. A girl in Shanghai watching a livestream. A father in Utah supporting his daughters’ activism. The digital isn’t just real — it’s redefining what real means.

As we wrapped, Michael teased a new bonus chapter he’s releasing, The Innovator. Naturally, I asked him back for when it drops. Because this conversation isn’t just about what’s been done — it’s about what comes next.

So if you’re wondering where to start, just remember the Eleanor Roosevelt quote Michael brought back: “The way to begin is to begin.”

Download the app. Take one action. The world is listening.

Cheers,
Marco

⸻ Keywords ⸻
Society and Technology, AI ethics, generative AI, tech innovation, digital transformation, tech, technology, Global Citizen, Michael Sheldrick, ending poverty, pop culture activism, technology for good, social impact, digital advocacy, Redefining Society, AI in nonprofits, youth engagement, music and change, activism app, social movements, John Legend, sustainable development, global action, climate change, eradicating polio, tech for humanity, podcast on technology

Ep 197The Hybrid Species — When Technology Becomes Human, and Humans Become Technology | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
The Hybrid Species — When Technology Becomes Human, and Humans Become Technology
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
July 19, 2025

We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

In my last musing, I revisited Robbie, the first of Asimov’s robot stories — a quiet, loyal machine who couldn’t speak, didn’t simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway: If today’s machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don’t just co-exist, but merge?

I didn’t pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn’t just imagining smarter machines. He was imagining something new. Something between.

And that idea has never felt more real than now.

We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations. 
But what if the next mutation doesn’t come from biology at all? What if it comes from what we build?

I’ve always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.

It’s not just science fiction. Philosopher Andy Clark — whose work I’ve followed for years — calls us “natural-born cyborgs.” Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.

And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won’t “use” AI. She’ll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won’t be a device she picks up — it’ll be a presence she grows up with.

This isn’t some distant future. It’s already happening. And yet, I don’t believe we’re necessarily losing something. Not if we’re aware of what we’re merging with. Not if we remember who we are while becoming something new.

This is where I return, again, to Asimov — and in particular, The Bicentennial Man. It’s the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it’s not just about a machine becoming like us. It’s also about us learning to accept that humanity might not begin and end with flesh.

We spend so much time fearing machines that pretend to be human. 
But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care? And what if that shift is reciprocal?

Because here’s the thing: I don’t think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it’s messier. I think it’s more beautiful than that.

I think it’s about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we’ve always built to support us.

Maybe evolution isn’t just natural selection anymore. Maybe it’s cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what’s still beautiful about being analog: the imperfections, the contradictions, the moments that don’t make sense but still matter.

We once built tools to serve us. Now we build them to complete us.

And maybe — just maybe —
Ep 196The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again | Guest: Abadesi Osunsade | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again

Guest: Abadesi Osunsade
Founder @ Hustle Crew
WebSite: https://www.abadesi.com
On LinkedIn: https://www.linkedin.com/in/abadesi/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master’s Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
What happens when someone with a multicultural worldview, startup grit, and a relentless focus on inclusion sits down to talk about tech, humanity, and the future? You get a conversation like this one with Abadesi Osunsade. We touched on everything from equitable design and storytelling to generative AI and ethics. This episode isn’t about answers — it’s about questions that matter. And it reminded me why I started this show in the first place.
⸻ Article ⸻
Some conversations remind you why you hit “record” in the first place. This one with Abadesi Osunsade — founder of Hustle Crew, podcast host of Techish, and longtime tech leader — was exactly that kind of moment. We were supposed to connect in person at Infosecurity Europe in London, but the chaos of the event kept us from it. I’m glad it worked out this way instead, because what came out of our remote chat was raw, layered, and deeply human. 
Abadesi and I explored a lot in just over 30 minutes: her journey through big tech and startups, the origins of Hustle Crew, and how inclusion and equity aren’t just HR buzzwords — they’re the foundation of better design. Better products. Better culture.

We talked about the usual “why diversity matters” angle — but went beyond it. She shared viral real-world examples of flawed design (like facial recognition or hand dryers that don’t register dark skin) and challenged the myth that inclusive design is more expensive. Spoiler: it’s more expensive not to do it right the first time.

Then we jumped into AI — not just how it’s being built, but who is building it. And what it means when those creators don’t reflect the world they’re supposedly designing for. We talked about generative AI, ethics, simulation, capitalism, utopia, dystopia — you know, the usual light stuff.

What stood out most, though, was her reminder that this work — inclusion, education, change — isn’t about shame or guilt. It’s about possibility. Not everyone sees the world the same way, so you meet them where they are, with stories, with data, with empathy. And maybe, just maybe, you shift their perspective.

This podcast was never meant to be just about tech. It’s about how tech shapes society — and how society, in turn, must shape tech. Abadesi brought that full circle. Take a listen. Think with us. Then go build something better.

⸻ Keywords ⸻ Society and Technology, AI ethics, generative AI, inclusive design, tech innovation, product development, digital transformation, tech, technology, Diversity & Inclusion, equity in tech, inclusive leadership, unconscious bias, diverse teams, representation matters, belonging at work

Enjoy. Reflect. 
Share with your fellow humans. And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.

End of transmission.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company, or in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Ep 195Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness
June 29, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I recently revisited one of my oldest companions. Not a person, not a memory, but a story. Robbie, the first of Isaac Asimov’s famous robot tales.

It’s strange how familiar words can feel different over time. I first encountered Robbie as a teenager in the 1980s, flipping through a paperback copy of I, Robot. Back then, it was pure science fiction. The future felt distant, abstract, and comfortably out of reach. Robots existed mostly in movies and imagination. Artificial intelligence was something reserved for research labs or the pages of speculative novels. Reading Asimov was a window into possibilities, but they remained possibilities.

Today, the story feels different. I listened to it this time—the way I often experience books now—through headphones, narrated by a synthetic voice on a sleek device Asimov might have imagined, but certainly never held. And yet, it wasn’t the method of delivery that made the story resonate more deeply; it was the world we live in now.

Robbie was first published in 1939, a time when the idea of robots in everyday life was little more than fantasy. Computers were experimental machines that filled entire rooms, and global attention was focused more on impending war than machine ethics. 
Against that backdrop, Asimov’s quiet, philosophical take on robotics was ahead of its time.

Rather than warning about robot uprisings or technological apocalypse, Asimov chose to explore trust, projection, and the human tendency to anthropomorphize the tools we create. Robbie, the robot, is mute, mechanical, yet deeply present. He is a protector, a companion, and ultimately, an emotional anchor for a young girl named Gloria. He doesn’t speak. He doesn’t pretend to understand. But through his actions—loyalty, consistency, quiet presence—he earns trust.

Those themes felt distant when I first read them in the ’80s. At that time, robots were factory tools, AI was theoretical, and society was just beginning to grapple with personal computers, let alone intelligent machines. The idea of a child forming a deep emotional bond with a robot was thought-provoking but belonged firmly in the realm of fiction.

Listening to Robbie now, decades later, in the age of generative AI, alters everything. Today, machines talk to us fluently. They compose emails, generate artwork, write stories, even simulate empathy. Our interactions with technology are no longer limited to function; they are layered with personality, design, and the subtle performance of understanding.

Yet beneath the algorithms and predictive models, the reality remains: these machines do not understand us. They generate language, simulate conversation, and mimic comprehension, but it’s an illusion built from probability and training data, not consciousness. And still, many of us choose to believe in that illusion—sometimes out of convenience, sometimes out of the innate human desire for connection.

In that context, Robbie’s silence feels oddly honest. He doesn’t offer comfort through words or simulate understanding. His presence alone is enough. There is no performance. No manipulation. 
Just quiet, consistent loyalty.

The contrast between Asimov’s fictional robot and today’s generative AI highlights a deeper societal tension. For decades, we’ve anthropomorphized our machines, giving them names, voices, personalities. We’ve designed interfaces to smile, chatbots to flirt, AI assistants that reassure us they “understand.” At the same time, we’ve begun to robotize ourselves, adapting to algorithms, quantifying emotions, shaping our behavior to suit systems designed to optimize interaction and efficiency.

This two-way convergence was precisely what Asimov spoke about in his 1965 BBC interview, which has been circulating again recently. In that conversation, he didn’t just speculate about machines becoming more human-like. He predicted the merging of biology and technology, the slow erosion of the boundaries between human and machine—a hybrid species, where both evolve toward a shared, indistinct future.

We are living that reality now, in subtle and obvious ways. Neural implants, mind-controlled prosthetics, AI-driven decision-making, personalized algorithms—all shaping the way we experience life and interact with the world. The convergence isn’t on the horizon; it’s happening in real time.

What fascinates me, listening to Robbie in this new context, is how much of
Ep 194Bridging Worlds: How Technology Connects — or Divides — Our Communities | Guest: Lawrence Eta | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Bridging Worlds: How Technology Connects — or Divides — Our Communities

Guest: Lawrence Eta
Global Digital AI Thought Leader | #1 International Best Selling Author | Keynote Speaker | TEDx Speaker | Multi-Sector Executive | Community & Smart Cities Advocate | Pioneering AI for Societal Advancement
WebSite: https://lawrenceeta.com
On LinkedIn: https://www.linkedin.com/in/lawrence-eta-9b11139/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master’s Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
In this episode of Redefining Society and Technology, I sit down with Lawrence Eta — global technology leader, former CTO of the City of Toronto, and author of Bridging Worlds. We explore how technology, done right, can serve society, reduce inequality, and connect communities. From public broadband projects to building smart — sorry, connected — cities, Lawrence shares lessons from Toronto to Riyadh, and why tech is only as good as the values guiding it.
⸻ Article ⸻
As much as I love shiny gadgets, blinking lights, and funny noises from AI — we both know technology isn’t just about cool toys. It’s about people. It’s about society. It’s about building a better, more connected world. 
That’s exactly what we explore in my latest conversation on Redefining Society and Technology, where I had the pleasure of speaking with Lawrence Eta. If you don’t know Lawrence yet — let me tell you, this guy has lived the tech-for-good mission. Former Chief Technology Officer for the City of Toronto, current Head of Digital and Analytics for one of Saudi Arabia’s Vision 2030 mega projects, global tech consultant, public servant, author… basically, someone who’s been around the block when it comes to tech, society, and the messy, complicated intersection where they collide. We talked about everything from bridging the digital divide in one of North America’s most diverse cities to building entirely new digital infrastructure from scratch in Riyadh. But what stuck with me most is his belief — and mine — that technology is neutral. It’s how we use it that makes the difference. Lawrence shared his experience launching Toronto’s Municipal Broadband Network — a project that brought affordable, high-speed internet to underserved communities. For him, success wasn’t measured by quarterly profits (a refreshing concept, right?) but by whether kids could attend virtual classes, families could access healthcare online, or small businesses could thrive from home. We also got into the “smart city” conversation — and how even the language we use matters. In Toronto, they scrapped the “smart city” buzzword and reframed the work as building a “connected community.” It’s not about making the city smart — it’s about connecting people, making sure no one gets left behind, and yes, making technology human. Lawrence also shared his Five S principles for digital development: Stability, Scalability, Solutions (integration), Security, and Sustainability. Simple, clear, and — let’s be honest — badly needed in a world where tech changes faster than most cities can adapt. 
We wrapped the conversation with the big picture — how technology can be the great equalizer if we use it to bridge divides, not widen them. But that takes intentional leadership, community engagement, and a shared vision. It also takes reminding ourselves that beneath all the algorithms and fiber optic cables, we’re still human. And — as Lawrence put it beautifully — no matter where we come from, most of us want the same basic things: safety, opportunity, connection, and a better future for our families.

That’s why I keep having these conversations — because the future isn’t just happening to us. We’re building it, together. If you missed the episode, I highly recommend listening — especially if you care about technology serving people, not the other way around. Links to connect with Lawrence and to the full episode are below — stay tuned for more, and let’s keep redefining society, together.

⸻ Keywords ⸻ Connected Communities, Smart Cities, Digital Divide, Public Broadband, Technology and Society, Digital Infrastructure, Technology for Good, Community Engagement, Urban Innovation, Digital Inclusion, Public-Private Partnerships, Tech Leadership

Enjoy. Reflect. Share with your fellow humans. And if you haven’t already, subscri

Ep 193What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It’s Not a Human Problem | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It’s Not a Human Problem
A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli
June 6, 2025

A Post-Infosecurity Europe Reflection on the Strange but Predictable Ways We’ve Spent Thirty Years Pretending Cybersecurity Isn’t About People.
⸻
Once there was a movie titled “Young Frankenstein” (1974) — a black-and-white comedy directed by Mel Brooks, written with Gene Wilder, and starring Wilder and Marty Feldman, who delivers the iconic “What hump?” line.

Let me describe the scene:

[Train station, late at night. Thunder rumbles. Dr. Frederick Frankenstein steps off the train, greeted by a hunched figure holding a lantern — Igor.]

Igor: Dr. Frankenstein?
Dr. Frederick Frankenstein: It’s Franken-steen.
Igor: Oh. Well, they told me it was Frankenstein.
Dr. Frederick Frankenstein: I’m not a Frankenstein. I’m a Franken-steen.
Igor (cheerfully): All right.
Dr. Frederick Frankenstein (noticing Igor’s eyes): You must be Igor.
Igor: No, it’s pronounced Eye-gor.
Dr. Frederick Frankenstein (confused): But they told me it was Igor.
Igor: Well, they were wrong then, weren’t they?

[They begin walking toward the carriage.]

Dr. Frederick Frankenstein (noticing Igor’s severe hunchback): You know… I’m a rather brilliant surgeon. Perhaps I could help you with that hump.
Igor (looks puzzled, deadpan): What hump?

[Cut to them boarding the carriage, Igor climbing on the outside like a spider, grinning wildly.]

It’s a joke, of course. One of the best. A perfectly delivered absurdity that only Mel Brooks and Marty Feldman could pull off. But like all great comedy, it tells a deeper truth.

Last night, standing in front of the Tower of London, recording one of our On Location recaps with Sean Martin, that scene came rushing back. We joked about invisible humps and cybersecurity. And the moment passed. 
Or so I thought. Because hours later — in bed, hotel window cracked open to the London night — I was still hearing it: “What hump?”

And that’s when it hit me: this isn’t just a comedy bit. It’s a diagnosis. Here we are at Infosecurity Europe, celebrating its 30th anniversary. Three decades of cybersecurity: a field born of optimism and fear, grown in complexity and contradiction.

We’ve built incredible tools. We’ve formed global communities of defenders. We’ve turned “hacker” from rebel to professional job title — with a 401(k), branded hoodies, and a sponsorship deal. But we’ve also built an industry that — much like poor Igor — refuses to admit something’s wrong.

The hump is right there. You can see it. Everyone can see it. And yet… we smile and say: “What hump?”

We say cybersecurity is a priority. We put it in slide decks. We hold awareness months. We write policies thick enough to be used as doorstops. But then we underfund training. We silo the security team. We click links in emails that say whatever will make us think it’s important — just like those pieces of snail mail stamped URGENT that we somehow believe, even though it turns out to be an offer for a new credit card we didn’t ask for and don’t want. Except this time, the payload isn’t junk mail — it’s a clown on a spring exploding out of a fun box.

The hump moves, shifts, sometimes disappears from view — but it never actually goes away. And if you ask about it? Well… they were wrong then, weren’t they?

That’s because it’s not a technology problem. This is the part that still seems hard to swallow for some: Cybersecurity is not a technology problem. It never was.

Yes, we need technology. But technology has never been the weak link. The weak link is the same as it was in 1995: us. The same it was before the internet and before computers: Humans. With our habits, assumptions, incentives, egos, and blind spots. We are the walking, clicking, swiping hump in the system.

We’ve had encryption for decades. 
We’ve known about phishing since the days of AOL. Zero Trust was already discussed in 2004 — it just didn’t have a cool name yet.

So why do we still get breached? Why does a ransomware gang with poor grammar and a Telegram channel take down entire hospitals? Because culture doesn’t change with patches. Because compliance is not belief. Because we keep treating behavior as a footnote, instead of the core.

The Problem We Refuse to See

At the heart of this mess is a very human phenomenon: if we can’t see it, we pretend it doesn’t exist.

We can quantify risk, but we rarely internalize it. We trust our tech stack but don’t trust our users. We fund detection but ignore education. And not just at work — we ignore it from the start. We still teach children how to cross the street, but not how to navigate a phishing attempt or recognize algorithmic manipulation. We give them connected devices before we teach them what being connected means. In this Hybrid Analog Digital Society, we need to treat cybersecurity not as an optional adult concern, but as a foundational part of growing up. Because by the time someone gets to t

Ep 192From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We’re Losing the Art of Understanding Each Other | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We’re Losing the Art of Understanding Each Other
May 21, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

There’s this thing I’ve dreamed about since I was a kid. No, it wasn’t flying cars. Or robot butlers (although I wouldn’t mind one to fold the laundry). It was this: having a real conversation with someone — anyone — in their own language, and actually understanding each other.

And now… here we are.

Reference: Google brings live translation to Meet, starting with Spanish.
https://www.engadget.com/apps/google-brings-live-translation-to-meet-starting-with-spanish-174549788.html

Google just rolled out live AI-powered translation in Google Meet, starting with Spanish. I watched the demo video, and for a moment, I felt like I was 16 again, staring at the future with wide eyes and messy hair. It worked. It was seamless. Flawless. Magical.

And then — drumroll, please — it sucked! Like… really, existentially, beautifully sucked. Let me explain.

I’m a proud member of Gen X. I grew up with cassette tapes and Walkmans, boomboxes and mixtapes, floppy disks and Commodore 64s, reel-to-reel players and VHS decks, rotary phones and answering machines. I felt language — through static, rewinds, and hiss. Yes, I had to wait FOREVER to hit Play and Record at the exact right moment, tape songs off the radio onto a Maxell, label it by hand, and rewind it with a pencil when the player chewed it up. I memorized long-distance dialing codes. I waited weeks for a letter to arrive from a pen pal abroad, reading every word like it was a treasure map.

That wasn’t just communication. That was connection.

Then came the shift. I didn’t miss the digital train — I jumped on early, with curiosity in one hand and a dial-up modem in the other. Early internet. Mac OS. My first email address felt like a passport to a new dimension. 
I spent hours navigating the World Wide Web like a digital backpacker — discovering strange forums, pixelated cities, and text-based adventures in a binary world that felt limitless. I said goodbye to analog tools, but never to analog thinking.

So what is the connection with learning languages? Well, here’s the thing: exploring the internet felt a lot like learning a new language. You weren’t just reading text — you were decoding a culture. You learned how people joked. How they argued. How they shared, paused, or replied with silence. You picked up on the tone behind a blinking cursor, or the vibe of a forum thread.

Similarly, when you learn a language, you’re not just learning words — you’re decoding an entire world. It’s not about the words themselves — it’s about the world they build. You’re learning gestures. Food. Humor. Social cues. Sarcasm. The way someone raises an eyebrow, or says “sure” when they mean “no.” You’re learning a culture’s operating system, not just its interface.

AI translation skips that. It gets you the data, but not the depth. It’s like getting the punchline without ever hearing the setup.

And yes, I use AI to clean up my writing. To bounce translations between English and Italian when I’m juggling stories. But I still read both versions. I still feel both versions. I’m picky — I fight with my AI counterpart to get it right. To make it feel the way I feel it. To make you feel it, too. Even now. I still think in analog, even when I’m living in digital.

So when I watched that Google video, I realized: We’re not just gaining a tool. 
We’re at risk of losing something deeply human — the messy, awkward, beautiful process of actually trying to understand someone who moves through the world in a different language — one that can’t be auto-translated. Because sometimes it’s better to speak broken English with a Japanese friend and a Danish colleague — laughing through cultural confusion — than to have a perfectly translated conversation where nothing truly connects.

This isn’t just about language. It’s about every tool we create that promises to “translate” life. Every app, every platform, every shortcut that promises understanding without effort. It’s not the digital that scares me. I use it. I live in it. I am it, in many ways. It’s the illusion of completion that scares me. The moment we think the transformation is done — the moment we say “we don’t need to learn that anymore” — that’s the moment we stop being human.

We don’t live in 0s and 1s. We live in the in-between. The gray. The glitch. The hybrid.

So yeah, cheers to AI-powered translation, but maybe keep your Walkman nearby, your phrasebook in your bag — and your curiosity even closer. Go explore the world. Learn a few words in a new language. Mispronounce them. Get them wrong. Laugh about it. People will appreciate your effort far more than your fancy iPhone.

Alla prossima,
— Marco

📬 Enjoyed this transmission? Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/
New stories always incoming.
🌀 L
Ep 191Why Humanity’s Software Needs an Update in Our Hybrid World — Before the Tech Outpaces Us | Guest: Jeremy Lasman | Redefining Society And Technology Podcast With Marco Ciappelli
Guest: Jeremy Lasman
Website: https://www.jeremylasman.com
LinkedIn: https://www.linkedin.com/in/jeremylasman
_____________________________
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society & Technology Podcast
Visit Marco's website 👉 https://www.marcociappelli.com
_____________________________
This Episode’s Sponsors
BlackCloak 👉 https://itspm.ag/itspbcweb
_____________________________
Show Notes Blog:

In this thought-provoking episode of Redefining Society & Technology, I sit down with Jeremy Lasman to question the most overlooked gadget in the human-tech equation: our own mind. We ask — if we keep updating our devices, why don’t we update the inner operating system that powers our thoughts, creativity, and connection to the world?

Jeremy, a former SpaceX technologist turned philosopher-inventor, shares his journey from corporate IT to what he calls his “soul’s work”: challenging the legacy software running our lives — fear-based, outdated models of thinking — with something he calls “Imagination Technology.” It’s not metaphorical. It’s a real framework. And yes, it sounds wild — but it also makes a lot of sense.

We touch on everything from open-source thinking to quantum consciousness, from the speed of technological evolution to the bottlenecks of our cultural structures like education and societal expectations. At the center is a call to action: we need to stop treating passion as a luxury and instead recognize it as the fuel for personal and collective evolution.

Together, we reflect on how society tends to silo disciplines, discourage curiosity, and cling to binary thinking in a world that demands fluidity. Jeremy argues that redefining society begins with redefining the self — tearing down internal walls, embracing timelessness, and running life not on fear, but on imagination.

Is this transhumanism? Is it spiritual philosophy dressed up in tech language? Maybe. But it’s also deeply human — and urgent. 
Because in a world where AI and tech evolve by the day, we can’t afford to be running on emotional floppy disks.

So here’s the challenge: what if the next big upgrade isn’t an app, a device, or even a new piece of hardware — but a reprogramming of how we see ourselves?

Enjoy. Reflect. Share with your fellow humans. And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.

End of transmission.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company, or in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.