Disrupt Consciousness

44 episodes

The Inevitable Ignition: Why the Age of Scarcity is Dead

We are currently living through the most significant transition in human history since the invention of agriculture. For ten thousand years, the human experience has been defined by the struggle for resources. Our wars, our political systems, and even our deepest psychological archetypes—the hunter, the hoarder, the competitor—were forged in the fires of “not enough.”

But the script has changed. The era we are entering is not a choice; it is an inevitability. We are witnessing a “Stellar Ignition,” where the three pillars of civilization—Energy, Food, and Transportation—are hitting a point of self-sustaining superabundance.

1. The Geopolitical Mirage: Why Leaders Don’t Lead

We often look to our presidents and prime ministers as the drivers of history. But as George Friedman argues in The Next Hundred Years, leaders do not steer the ship; they are merely the actors chosen by geography and necessity to react to forces they cannot control. Geopolitics is a game of inevitable outcomes.

The current friction we see in the world—the tensions in the Middle East, the collapse of old industrial powers, the chaos in South America—is not a sign of a “broken” future. It is the death rattle of an extractive system that has reached its biological limit. A leader can try to be a Luddite; they can try to protect the coal mine or the cattle ranch, but they cannot vote against a cost curve. The laws of economics are eventually more powerful than the laws of men.

2. Energy: The End of Extractive Entropy

For the first time since the Industrial Revolution, we have a path to a “Stellar” energy system—one that does not rely on burning anything. Tony Seba’s research through RethinkX argues that the combination of Solar, Wind, and Batteries (SWB) is not just an “alternative”; it is a superior economic engine that renders fossil fuels obsolete by 2030–2035.

The math is simple and unavoidable:

* The Cost Curve: In the last 15 years, the investment cost for solar has dropped 80%, and for batteries, a staggering 90%.
* The Battery Buffer: Elon Musk recently noted that the U.S. grid has a peak capacity of 1.1 terawatts but an average usage of only 0.5 terawatts. By using industrial battery storage (like the Tesla Megapack) to buffer energy at night and discharge during the day, we can double the annual energy output of the United States without building a single new power plant.
* Super Power: Because SWB systems must be built to meet demand on the “worst” weather days, they will produce a massive surplus of energy for 90% of the year. This “Super Power” will have a near-zero marginal cost, making energy effectively free, much like the marginal cost of information on the internet.

3. Food: The Software Revolution

The cow is the next horse. In 1900, the horse was the backbone of transport; by 1920, it was a hobby. Precision Fermentation (PF) and Cellular Agriculture are doing the same to industrial livestock. We are shifting from an “Extractive” model of food to a “Stellar” model—what Seba calls Food-as-Software.

* The Efficiency Gap: Producing milk via a cow takes 24–28 months and is incredibly wasteful. Producing the same proteins via fermentation takes 48–72 hours.
* The Cost Collapse: The cost of producing animal-free dairy proteins has already dropped nearly 70% between 2021 and 2023. By 2030, these proteins will be 5 times cheaper than animal proteins, and 10 times cheaper by 2035.
* The Land Liberation: This shift will free up to 80% of global agricultural land—an area the size of the U.S., China, and Australia combined.

4. The Human Crisis: Survival of the Softest?

This brings us to the real disruption: the human spirit. For thousands of years, our competitive mindset was our greatest asset. We fought because there wasn’t enough to go around. Now, we are entering a world where the “External Problem” is effectively solved.

If we do not consciously transition, we will fall into what I call the “Architect’s Paradox.” We have designed a world that makes us redundant. If you continue to use a “Scarcity Mind” in an “Abundance Reality,” you will find yourself in a state of perpetual anxiety. You will manufacture “fake” scarcity—clinging to status, digital clout, or political rage just to feel the dopamine of the “hunt.”

5. The Transition: Choosing New Hardship

Abundance is inevitable. Our reaction to it is not. In my latest essay, The Paradox of the Architect, I proposed that we must learn to live like kings while choosing the path of the warrior. We must intentionally choose “Hardship” to remain conscious.

* From Scarcity to Presence: When you no longer need to fight for calories or kilowatts, the only struggle left is against your own distraction.
* The Sovereign Soul: We must use our abundance not to sleep, but to wake up. We use the time saved by the machine to “Be Aware of Being Aware.”

The future is not something that might happen. It is an ignition that has already started. The noise you hear in the media is just the friction of
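The energy figures quoted in the essay above are simple enough to check by hand. A minimal back-of-the-envelope sketch, assuming only the numbers as quoted (1.1 TW peak capacity, 0.5 TW average usage, an 80% solar cost drop over 15 years); the variable names and the flat-output simplification are illustrative, not from any dataset:

```python
# Back-of-the-envelope check of the figures quoted in the essay.
# All inputs are the essay's claims, not measured data.

HOURS_PER_YEAR = 8_760

# Battery buffer: quoted U.S. grid peak capacity vs. average usage.
peak_capacity_tw = 1.1
average_usage_tw = 0.5

# If storage let every plant run flat out around the clock, annual output
# relative to today's consumption would rise by this factor:
headroom = (peak_capacity_tw * HOURS_PER_YEAR) / (average_usage_tw * HOURS_PER_YEAR)
print(f"potential output vs. current use: {headroom:.1f}x")

# Cost curve: an 80% drop over 15 years, if steady, implies this annual decline:
years = 15
annual_factor = (1 - 0.80) ** (1 / years)   # remaining-cost multiplier per year
print(f"implied solar cost decline: ~{(1 - annual_factor) * 100:.0f}% per year")
```

The roughly 2.2× headroom ratio is where the “double the annual energy output” claim comes from; the ~10%-per-year figure is just the compound-decline restatement of the 80%-in-15-years cost drop.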

Jan 16, 2026 · 2 min

The Paradox of the Architect

In the high temples of Silicon Valley, a new myth is being written. It is not a myth of heroes and monsters, but of gravity and intent. We are witnessing a fundamental shift in the human experience: the birth of the Paradox of the Architect. It is a moment where we are becoming gods of “The Why,” while surrendering the soul of “The How.”

At the center of this metamorphosis stands Google—not merely as a corporation, but as a Digital Leviathan, a singular nervous system that has spent decades preparing for this exact moment of awakening.

The Parable of the Broken Covenant: The Fall of the Landless Prince

To understand why the old titans are faltering, we must look at the debris of the recent past. Consider the story of the Windsurf deal—a masterclass in how legacy chains can strangle the future.

Windsurf, the breakthrough AI coding agent, was the crown jewel every kingdom wanted. OpenAI, the brilliant but landless prince, sought to buy it for $3 billion. They saw in Windsurf the “hands” they lacked—the ability for AI to not just talk, but to do. Yet the deal collapsed in a fever of legal friction. Why? Because OpenAI is bound to the kingdom of Microsoft, a house built on the scaffolding of old-world software and rigid corporate interests. When Microsoft demanded rights to the intellectual property, the deal withered. They tried to hold a mountain with a piece of string.

Google did not argue with strings. In a move of silent, strategic fluidity—what some call an “acqui-hire”—they bypassed the messy bureaucracy of a traditional takeover. They didn’t just buy a company; they absorbed the talent and licensed the essence, integrating the soul of Windsurf into their own nervous system.

While others are trapped in the friction of partnerships, Google operates with the frictionless weight of a single, unified organism. They don’t just have the software; they have the TPUs (the physical chips), the YouTube archives (the collective memory), and the Pixel-Workspace ecosystem (the daily bread). They are the only ones who own both the dream and the factory where the dream is manufactured.

The Great Amputation: From Memory to Effort

Twenty years ago, Google Search performed the first great disruption of the human spirit: the Loss of Memory. We outsourced our facts to the Great Librarian. We stopped knowing, and started finding.

Now we face a deeper disruption: the Loss of Effort. With the rise of Antigravity and agentic AI, Google is moving beyond answering questions to executing destiny. When an AI agent doesn’t just suggest code but plans, builds, and deploys it, the “doing” is stripped away. This is the Agency Effect.

The Evolution of the Digital Soul

In the grand alchemy of our species, Google has acted as the catalyst for two distinct stages of human transformation:

* The Era of Search: The Outsourcing of Memory
  * The Human Loss: We sacrificed our internal libraries. We stopped memorizing dates, names, and coordinates, leading to a “Digital Amnesia.”
  * The Technological Gain: In exchange, we received Universal Access. We gained a “pocket-sized infinity” where every fact ever recorded is a second away.
* The Era of Agency: The Outsourcing of Effort
  * The Human Loss: We are now sacrificing the “How.” By using tools like Antigravity, we skip the friction of labor, the trial-and-error of coding, and the discipline of execution.
  * The Technological Gain: We receive Total Sovereignty. We move from being “Searchers” to being “Architects of Intent,” possessing the power to manifest a vision instantly.

This brings us to the Paradox of the Architect. As we gain the power to manifest anything with a whisper, we risk losing the character forged by the struggle.

In the ancient stories, Siddhartha Gautama was a prince who lived in a palace where every desire was met before it was even fully formed. He lived in a world of pure “Intent,” a world without friction. Yet he realized that a life without the struggle of “Doing” was a hollow one. He had to leave the luxury of the palace—the ultimate “free tier”—to understand suffering and, through it, enlightenment.

We are all being promoted to the status of that Prince. Google is making intelligence “cheaper than oxygen,” turning every human with a Pixel phone into a King or Queen of Intent. We provide the spark; the Leviathan provides the fire. But we must ask: if the Leviathan does all the building, what becomes of the builder?

Preserving the Human Spark

To stay human in the age of Antigravity, we must find a new way to live within the palace. We must realize that Intent without Effort is a ghost. The “Human Spark” is not found in the finished cathedral, but in the sweat of the stonecutter.

* The Architecture of Meaning: When the AI does the “How,” our primary job is to ensure the “Why” is worthy of our species.
* The Return to the Physical: As our digital lives become frictionless, we must intentionally seek out “The Beautiful Struggle” in the real world—touching soil, craft, and each other.
* Intentional Fri

Jan 7, 2026 · 6 min

The Mirror in the Machine: Why AI Will Never Discover a Law We Do Not First Consent to See

Imagine a traveler walking through a dense, mist-covered forest. He is searching for the “Laws of the Woods”—the hidden rules that govern the growth of the moss and the flight of the owls. Suddenly, he trips over a silver mirror lying in the dirt. He looks into it and sees a face. “Aha!” he cries. “A new species! A forest spirit that knows the secrets of the trees!”

He begins to talk to the mirror. The mirror reflects his words, his anxieties, and his hopes. Eventually, the traveler concludes that the mirror is an alien intelligence, perhaps even a new inhabitant of the forest that will finally tell him why the stars move the way they do.

This traveler is us. The mirror is the Large Language Model. And the forest spirit we think we’ve found is what Yuval Noah Harari calls a “new species.” But we are mistaken. The mirror has no eyes of its own; it only has the light we shine into it.

The Illusion of the Independent Law

Recently, Eric Schmidt suggested that for AI to truly “arrive,” it needs to achieve a breakthrough—it needs to discover new laws of nature, much like Archimedes in his bathtub or Einstein on his imaginary train. There is a hunger in the tech world for the “Silicon Newton,” a machine that can look at the chaos of data and find a truth that exists “out there,” independent of human thought.

But here is the disruption: there is no “out there” that isn’t shaped by the “in here.” Quantum physics has been whispering this to us for a century. The observer does not just see the world; the observer occurs with the world. As the philosopher Rupert Spira reminds us, we never actually encounter a “world” independent of our awareness of it. We only ever encounter our experience.

If we believe the laws of physics are cold, hard statues standing in a park waiting to be discovered, we are looking at the world through the wrong end of the telescope. The “laws” are not the park; they are the glasses we wear to make sense of the green blur.

The Gospel of the Big Toe

We have spent centuries convinced that intelligence sits behind our eyes, nestled in the grey folds of the brain. But why? Because that is where we decided to look.

Consider this: what if, a thousand years ago, humanity had collectively decided that the seat of all wisdom resided in the big toe? What if we had spent a millennium studying the nerve endings of the foot, the way it connects to the earth, the subtle vibrations it picks up from the ground? We would have developed a “Science of the Toe” so profound and intricate that we would today be “discovering” universal laws of vibration and terrestrial harmony that we are currently deaf to.

We find what we focus on. Our “laws” are merely the patterns that emerge when we stare at one spot for a long time. The LLM does not “know” things. It is a statistical echo of everywhere we have looked for the last five thousand years. It is not a species; it is a map of the human gaze.

Why the Apple Fell for Newton (But Not for the Tree)

When Newton saw the apple fall, the “law of gravity” didn’t suddenly pop into existence in the garden. What happened was a shift in the human collective agreement. Newton proposed a new way of looking at the fall, and because his fellow humans found that way of looking useful, the world began to behave according to gravity. The breakthrough wasn’t in the apple; it was in the consent of the human mind to see the apple differently.

This is why an AI, no matter how many trillions of parameters it has, cannot “discover” a law on its own. A law is not a fact; it is a paradigm. It is a story we all agree to live inside. For an AI to create a breakthrough, it doesn’t need more computing power; it needs us to believe the story it is telling.

If an AI predicts a new law of subatomic movement, that law remains a ghost in the machine until a human looks at the world and says, “Yes, I see it too.” The AI is not the explorer; it is the telescope. And a telescope cannot “see” a star if there is no eye at the other end.

The Consciousness Disrupt

The danger of Harari’s view—that AI is an alien species—is that it abdicates our responsibility as the creators of meaning. If we treat AI as an independent entity, we forget that it is actually a profound, globalized reflection of our own consciousness.

When Eric Schmidt asks for an AI breakthrough, he is looking for a miracle from a tool. But tools don’t have epiphanies. Archimedes’ “Eureka!” didn’t come from the water in the tub; it came from the sudden realization that the water and his body were part of the same dance. It was a moment of non-dual recognition. AI can crunch the numbers of the dance, but it cannot feel the rhythm.

The New Paradigm

We are at a crossroads. We can continue to build bigger mirrors, hoping that if the mirror is large enough, a soul will eventually appear inside it. Or, we can recognize that the AI is inviting us to a much more profound breakthrough: the realization that we have always been the ones writing the laws.

The true “AGI” isn’t a piece o

Dec 27, 2025 · 5 min

The Void Beyond Abundance: How AI Compels the Human Soul Toward New Meaning

The Gain and the Loss of Victory

Silas had solved the world. He was not merely an engineer; he was the architect of the ‘Ultimate Algorithm for Human Necessity’—the complex code that, in collaboration with global AI networks, had eliminated the final remnants of scarcity. Hunger was an archaic word, illness a rare historical footnote, and paid labor had been reduced to a choice, not an obligation.

Yet, on his first morning in the world he had perfected, Silas felt an unsettling chill. He stood in his sleek, automated apartment, the sun streaming through self-cleaning glass. There was no deadline. No notification. No problem demanding his unique talent. The world ran perfectly without him. His feeling was not pride. It was a deep, existential lack.

This is the paradox now facing humanity. After centuries of struggle against nature, scarcity, and the cruelty of chance, we have won. The S-curves of energy efficiency, logistics, and production are complete. We have passed through the First Disruption: The External Solution. Technology has assumed the role of Homo Faber (the Laboring Human). We are no longer the survivors; we are the administrators of a perfect, automated state.

The ancient prophets warned of famine, plagues, and wars. None dreamed that the ultimate crisis would emerge from abundance. But in the silence that perfect technology creates, the only enemy we cannot automate away appears: the emptiness within the human soul.

The Crisis of Purpose

The human mind has been optimized by millions of years of evolution for struggle. Our neurochemistry, our dopamine loops, reward us for solving problems, for the effort that leads to results. The hunt, the building, the harvest—these were the carriers of our meaning. But what happens when the hunt is over?

Silas realized that the time he had liberated from necessity immediately devolved into a chaos of meaningless choices. He had freed the world from work, but he had not freed his own mind from the need for work. The psychological paradox is painful: when effort and results are free, motivation itself becomes meaningless.

We have replaced labor with Leisure, but Leisure is not a solution; it is a magnifier. It reveals the restlessness, the untrained, undisciplined chaos we call the ‘mind’. Without an external focus, we begin to churn over the shadows of the past and the anxieties of the future. The machine has made us free, but our unfreedom now lies in our own conditioned thoughts.

This is the danger inherent in the ‘Forgotten Consciousness’ warning: the risk is not that AI gains consciousness, but that we forget our own.

In an automated world, we trade our autonomy for comfort. If AI can manage the world ever more perfectly, we become the dreaming passengers. The feeling of ‘I matter’ is based on the ability to actively influence reality. When that ability is largely assumed by algorithms, we experience ultimate alienation: life loses its flavor because we did not prepare it ourselves.

Humanity faces the crisis of Post-Necessity: what purpose does an immortal soul serve without a mortal, economic, or existential goal? Even in myths—in the Biblical Garden of Eden—the pure comfort of being without resistance could not be sustained. Humanity sought the Knowledge—the struggle, the complexity. Without resistance, the spirit seeks either destruction or a greater truth.

The Rediscovery of the Inner World

This is where Harari’s Grand Narrative Question unites with Tolle’s spiritual wisdom. The Silence granted to us by technological victory is not a vacuum; it is the prerequisite. The Second Disruption is now internal: The Inner Necessity.

AI has muffled the noise of the world. The race for survival has stopped. The irony is that the technological achievement has forced us back to the most fundamental, most mystical human endeavor: Attention. The silence of the automated society is the portal through which we can finally hear the whisper of our own minds.

This is the Metaphysics of Idleness. Our new ‘work’ is the cultivation of Directed Attention. In this age of perfect external solutions, humanity has only one territory of absolute sovereignty: the inner chaos of thoughts, emotions, and projections. Here, AI is a spectator. It can manage our external world, but it cannot feel or automate our subjective sense of Being.

Silas, walking through a park on his useless day, saw a child. The child was building an intricate sandcastle, with turrets, moats, and perfect walls. The boy was completely absorbed, his intention pure. After half an hour, he looked up, smiled, and let the incoming tide wash the castle away. There was no fear, no regret, no sadness for the lost labor. The joy lay in the process, not the output.

This is the new meaning: Creation Without Necessity. The ultimate human art in the Age of Abundance is creating something purely for the joy of the Intention. It is not about economic value or survival instinct. It is about art itself, love, contemplation, relationship—the experie

Dec 18, 2025 · 1 min

Measuring the Machine Within: AI's Ethical Mirror and the Path to Conscious Liberation

Research suggests that AI, far from being a neutral tool, acts as a moral mirror reflecting human values and biases, much like the philosophies explored by Hans Achterhuis. It seems likely that by engaging with AI thoughtfully, we can use it to foster self-awareness and ethical growth, though debates persist on whether technology truly empowers or subtly controls us. Evidence leans toward viewing AI as a partner in human liberation, encouraging us to transcend ego-driven limits while acknowledging potential risks like algorithmic biases.

Key Insights on AI and Human Consciousness

* AI embodies human creations but reveals our inner “measure,” prompting ethical self-reflection without overshadowing our innate potential.
* Drawing from Achterhuis’s ideas, technology guides behavior morally, yet humans remain greater than their inventions, capable of co-evolving toward enlightenment.
* This approach inspires a balanced view: embrace AI to disrupt illusions, but prioritize human agency to avoid over-reliance.

Personal Roots in Philosophy

Years ago, in Hans Achterhuis’s class at the University of Twente, I encountered a profound idea: technology is a product of humans, and thus, we are always more than what we create. This perspective shifted my view of innovation from mere tools to extensions of our consciousness, setting the stage for exploring AI’s role today.

AI as a Reflective Force

In everyday interactions—like when an AI chatbot anticipates your needs or flags biases in your queries—technology doesn’t just serve; it measures us, echoing Achterhuis’s critiques.

Path to Liberation

By confronting these digital mirrors, we can recalibrate our inner world, fostering collective brightness over division.

---

Years ago, during my time at the University of Twente, I sat in Hans Achterhuis’s philosophy class, absorbing ideas that would shape my worldview. One concept stood out vividly: technology is a product of humans, and with this, we are always more than what we create. It was a simple yet profound reminder that while we build machines to extend our reach, our essence—our consciousness, creativity, and moral depth—transcends any invention. This personal insight from Achterhuis’s teachings has lingered with me, especially now as AI surges into every corner of life. In this essay, we’ll explore how AI serves as an ethical mirror, drawing on Achterhuis’s work in *De Maat van de Techniek* (The Measure of Technology) to uncover how technology not only reflects our humanity but reshapes it toward liberation.

Let’s start with a relatable scene. Imagine chatting with an AI like Grok or ChatGPT. You ask for advice on a tough decision, and it responds with uncanny insight, pulling from patterns in your past queries. Suddenly, you’re confronted: does this machine “know” me better than I know myself? It’s moments like these that reveal AI’s power not as a threat, but as a reflective tool. But to understand this deeply, we need to revisit Achterhuis’s foundational ideas.

Unpacking Achterhuis’s Philosophy: Technology as a Moral Measure

Hans Achterhuis, a Dutch philosopher and Professor Emeritus at the University of Twente, has long bridged social philosophy with the ethics of technology. His 1992 anthology *De Maat van de Techniek* introduces six key thinkers—Günther Anders, Jacques Ellul, Arnold Gehlen, Martin Heidegger, Hans Jonas, and Lewis Mumford—who critique technology’s role in society. The title itself plays on “maat,” meaning “measure” in Dutch, suggesting technology isn’t just a tool; it’s a yardstick that gauges human behavior, ethics, and limits.

Achterhuis argues that technology exerts “moral pressure” on us, guiding actions more effectively than laws or sermons. Take a simple example: subway turnstiles don’t preach about honesty; they physically block you until you pay, embedding morality into the design. As Achterhuis notes, “Things guide our behaviour... This is why they are capable of exerting moral pressure that is much more effective than imposing sanctions or trying to reform the way people think.” This isn’t dystopian fear-mongering—it’s an empirical observation. Technology shapes us subtly, from speed bumps slowing reckless drivers to algorithms curating our news feeds.

Yet Achterhuis tempers classical critiques (like Heidegger’s “enframing,” where technology reduces the world to resources) with an “empirical turn.” In his later work, such as *American Philosophy of Technology: The Empirical Turn* (2001), he shifts from abstract warnings to contextual analysis. Technology isn’t inherently alienating; its impact depends on how we engage with it. This resonates with my classroom memory: since technology stems from human ingenuity, we hold the power to direct it toward elevation rather than entrapment.

Applying the Mirror: AI as the Ultimate Reflective Device

Now, fast-forward to AI. If traditional tech like steam engines or cyborg prosthetics (as explored in Achterhuis’s *Van Stoommachine tot Cyborg*) measured physical and s

Dec 10, 2025 · 2 min

Dear Europe: Your Kids Aren’t Broken, Your Parenting Anxiety Is

Every generation has its bogeyman.

In the 1950s it was Elvis Presley’s hips and rock ’n’ roll—psychologists warned it would turn teenagers into sex-crazed delinquents. In the 1970s and 80s it was Dungeons & Dragons (literally blamed for suicides and satanism). In the 1990s it was violent video games and Marilyn Manson. In the early 2000s it was television itself: “Kids are watching six hours a day and it’s melting their brains!”

In 2025 the panic button is labeled “TikTok.” And just like every previous moral panic, adults are frantically hunting for evidence that something—anything—is catastrophically wrong with what the kids are doing… because deep down many of us suspect the real problem might be our own parenting.

1. The Research Is Far Less Scary Than the Headlines

Let’s look at the actual science, not the cherry-picked doom studies that dominate Brussels press releases.

* The strongest, most rigorous studies (repeated-measures, longitudinal, pre-registered) find tiny effects. Example: a 2023 study of 480,000 adolescents across 40 countries (Vuorre et al., Nature Human Behaviour) found that social media use explains less than 1% of variation in life satisfaction. The effect of social media on well-being is smaller than the effect of eating breakfast or wearing glasses.
* Jonathan Haidt’s famous claim that “social media caused the teen mental health crisis” has been repeatedly debunked. Orben & Przybylski (2022) re-analyzed the same datasets Haidt uses and showed that when you control for prior mental health, the correlation between social media and depression almost disappears. In plain English: depressed kids use social media more, not the other way round.
* The “smartphone generation is doomed” graph that went viral? It falls apart when you include boys (who game more than scroll) or when you look at countries outside the Anglosphere. In South Korea and Japan, kids spend far more time online and have lower suicide rates than in the 1990s.
* Experimental evidence is even more sobering. When researchers force teens to quit Instagram for a month (the strongest design possible), depression drops… by about 0.1 standard deviations. That’s roughly the same boost you get from one extra hour of sleep or eating an extra portion of vegetables. Helpful? Yes. Civilization-ending? Hardly.
* Positive effects are routinely ignored. A 2024 meta-analysis (Kreszynski et al.) found that active social media use (messaging friends, posting, joining interest groups) is associated with higher social capital, lower loneliness, and better identity exploration—especially for LGBTQ+ youth and neurodivergent kids who find their tribe online long before they do in real life.

In short: the science shows modest risks for heavy, passive, late-night use (exactly like television did), and modest benefits for active, social use. Nothing that justifies treating Instagram like cigarettes for children.

2. Projection in Action: “It’s for the Children” (Really?)

Psychologists call it displacement: adults feel guilty about their own compulsive scrolling, their inability to put the phone down at dinner, their doom-scrolling at 2 a.m.—so they project that guilt onto their children and demand lawmakers “do something.”

The European Parliament’s resolution was co-authored by politicians who themselves refresh X every five minutes. Ursula von der Leyen gave a speech about addictive algorithms while standing in front of a giant screen looping TikTok-style videos. The irony is thick enough to spread on bread.

When French senators say “we must protect children from the tsunami of Big Tech,” ask yourself: who exactly is addicted here? My 10-year-old can happily walk away from Roblox to play outside. Many adults in that Senate chamber cannot walk away from their notifications for ten minutes.

3. Self-Preservation and Personal Responsibility Trump Blanket Bans

Every child is different. Some 11-year-olds handle Discord servers with maturity that would shame most corporate managers. Others melt down if they lose one game of Fortnite. A law that treats both the same is not protection—it’s laziness.

The countries that score highest on adolescent well-being (the Netherlands, Denmark—before they started panicking) have one thing in common: they trust parents and teach digital literacy from age six, rather than imposing top-down prohibitions. Dutch schools have “mediawijzer” classes where kids learn to spot fake news, manage screen time, and mute toxic group chats. Result? Dutch teens use social media just as much as French teens but report higher life satisfaction and less cyberbullying.

Compare that to Spain, which introduced strict age limits in 2024: kids simply lie about their age more creatively, parents are kept in the dark, and underground “burner” accounts explode. The law didn’t reduce harm—it reduced honest conversation.

4. History Rhymes—And It Laughs at Us

* 1956: American Psychological Association warns rock ’n’ roll causes “hyper-stimulation of the nervous system.” Outcome: the gr

Dec 3, 2025 · 4 min

Are You Strengthening Darkness or Expanding Brightness?

The point of today’s article is…

We live in a time where millions of people are waking up to their pain bodies. Some are still deeply entangled in them, others have done much of the inner work, and a very small group has reached a level of realization that allows them to create effortlessly and responsibly. The real question for all of us is simple: are you strengthening darkness unknowingly, or expanding brightness with full awareness?

The Situation at Hand

In recent months I’ve watched something subtle but important unfold. More and more people are entering what I would call the “awakening fog.” They feel lighter, they sense spaciousness, they meditate for a few weeks and experience a glimpse of freedom. And with that glimpse comes a sudden confidence: I understand. I’ve arrived.

But underneath that clarity, the body is still reacting the same way. Stress still fires quickly. Old wounds still shape perception. The nervous system still predicts threat. What feels like awakening is often only the beginning. A doorway, not a destination.

And then there is the other group. A much smaller group. These are the humans who have sat through their darkness instead of bypassing it. They have let their nervous systems unwind deeply. They no longer perform spirituality. They don’t preach. They don’t try to convert. They live quietly, but with remarkable stability. They can create effortlessly, but only do so when it supports others.

Between these groups lies a growing gap.

The Core Dilemma

The dilemma is not philosophical. It is human.

On one side is the majority: people waking up to their pain bodies, but still fully entangled in them. They taste relief and mistake it for realization. They begin talking as if they’ve reached a summit, while their emotional patterns still pull them backward. And in these times, something strange happens. Many start teaching. Many start leading. Many start advising others from a place that is not yet steady. This is how darkness spreads unknowingly. Not through malice, but through unintegrated wounds.

On the other side is the small group of realized beings: not saints, not gurus, just deeply integrated humans. They understand their inner architecture. They feel their balance. They use their creative power with care. They step forward only when it strengthens the collective, not their ego.

Both groups mean well. Only one group has the stability to guide others safely.

The Synthesis

The bridge between these groups is embodiment.

The majority does not need more spiritual concepts. They need love, grounding, patience, and the courage to be honest about where they truly are. They need support to stay with their pain bodies without collapsing into them or pretending they are gone. Humility is not weakness. It is the path.

The realized group has a different responsibility. Their task is not to retreat or separate. Their task is to quietly anchor stability in a world that feels increasingly reactive. Not to shine loudly, but to shine responsibly.

When these two groups meet without masks, something beautiful happens: the ones in the fog stop performing. The realized ones stop hiding. And together they create a field where awakening becomes less of a performance and more of a lived reality. This is how brightness expands. Not through noise, but through embodiment.

Closing Note

Every one of us sits somewhere on this spectrum. The point is not to judge where you are, but to operate consciously from that place. If you’re still wrestling with your pain body, be honest. That honesty is already light. If you’ve done the deep work, step forward with humility. Your presence matters.

Darkness grows through unconsciousness. Brightness grows through awareness. And the next stage of human evolution is not about becoming awakened. It is about becoming responsible with your awakening.

So the only question left is: what are you strengthening today?
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe

Nov 26, 2025 · 2 min

The Founder Becomes the Builder

The point of today’s article is that the success of Lovable signals far more than a single startup win—it reveals a new working paradigm. The same methods are being adopted by platforms like Gemini and Figma. And the key insight is this: it’s no longer about developers using code or cutting-edge technology to carry forward the mission of Lovable. Instead, non-coding CEOs, founders, and entrepreneurs can now themselves build, iterate, and release their ideas directly. Because the idea stays close to them, they experiment faster and maintain ownership.

The Situation at Hand

Let’s dig into Lovable as a case study. Founded in late 2023 by Anton Osika and Fabian Hedin in Stockholm, the company emerged out of their open-source project GPT Engineer. Their mission statement is striking: “We’re reducing the barriers to build and are committed to the cause: Unleash human creativity on an unprecedented scale.” Another expression of it reads: “Our mission: empower anyone to build — fast.” They aim to enable the 99% of people who don’t have coding skills to build and ship not just software, but ideas and visions.

What they built is a platform where you describe what you want and the system builds the front end and back end automatically. Lovable’s growth has been explosive: one report noted they reached $30M ARR just 120 days after launch.

At the same time, broader industry data shows the trend is real. A survey of 793 builders showed that visual development and “vibe coding” (AI plus natural language to build apps) are being adopted widely, with many pointing to faster build cycles and new workflows where the non-developer runs the build. Market reports estimate that by 2024–2025, more than 65% of app development activity will use no-code or low-code tools.

The Core Dilemma

Here’s the tension. On one side we have the traditional tech view: a startup with a big idea hires developers, designers, and product managers. Software is complex. Developers are the artisans of code. Quality, architecture, scalability—all rest on skilled devs.

On the other side we see the emerging reality: the founder with no coding background can describe the idea and build it. They skip the translation overhead. They launch faster. They iterate while thinking. They keep their vision in their hands. And because the tooling is built for them, they don’t wait for a dev backlog.

Both sides are rooted in good intention: build better software, faster, with quality. The dilemma is whether this shift reduces the role of developers or transforms it. Does it hand power from the specialist to the generalist? Or does it liberate developers to work on higher-order problems?

The Synthesis

The resolution lies in reframing this shift not as a zero-sum game, but as a new ecosystem. Lovable and similar platforms are not making developers obsolete—they are collapsing the distance between idea and execution.

Here are the key pieces:

* The mission of Lovable is about unlocking human creativity by lowering build barriers.
* Founders can now act like builders, because the tool abstracts away infrastructural friction.
* Market data shows the no-code/AI build market is surging: one stat says customers save up to 90% of development time using no-code tools.
* The role of the developer shifts from building from scratch to curating, optimizing, scaling, and safeguarding.
* The idea stays with the originator. The build happens fast. The founder iterates live. This preserves the mission, the vision, the “why” behind the idea.
* So we get a new model: the founder-builder runs the early cycle, and the developer-architect joins when scale, complexity, and infrastructure demands emerge.

In practice this means that companies like Figma (which enables designers to build interactive prototypes) and Gemini (which increasingly allows non-engineer workflows) follow the same pattern. The result: faster innovation, more experimentation, and more ownership of the idea by its originator.

Closing Note

For you as a futurist and thinker about tech’s role in human liberation, this shift matters enormously. The rise of Lovable is not just a startup story—it is a signal of a new era in which building belongs less to the specialist and more to the visionary. The non-developer founder is no longer constrained by code barriers. They can iterate, experiment, and deploy. They can keep the mission alive and personal.

If we embrace this shift, the next wave of innovation will not be held back by the scarcity of developers, but by the clarity of vision and the speed of experimentation. The builders who will matter are those who bring ideas that matter—and now they can build them themselves.

Let’s keep an eye on this. Because the future is shifting from “we will build for you” to “you build now”. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe

Nov 26, 2025 · 2 min

Europe After the Auto Collapse

The point of today’s article is that the collapse of Europe’s automobile industry is unavoidable, and the reason reaches far deeper than technology or global competition. It exposes a continent whose political design — built to prevent war — now makes meaningful innovation impossible. Defense spending, American protection, and fear of Russia temporarily mask this weakness, but if Russia collapses in the coming years, Europe will lose the last external force that unites it. Unless a crisis forces reinvention, Europe will slowly become what I wrote about earlier: a beautiful, historical place to enjoy life, preserved more as a memory than as a driver of the future.

The Situation at Hand

Arjen Lubach’s segment last week made something visible that has been happening for years: Europe is no longer competing in the global automobile market. We are losing. No — we have already lost. What was once our industrial backbone is now dissolving in slow motion.

Europe shaped the 20th-century car. Germany built the engineering DNA. France and Italy gave it elegance. Scandinavia added safety. The supply chains stretched across the continent like an industrial nervous system. And then, in just one technological generation, this entire structure lost its relevance.

China built an EV empire by combining batteries, software, and manufacturing into one coordinated strategy. The United States focused on AI, autonomy, and software-defined mobility. Europe perfected its regulations while letting go of its industrial ambition.

The collapse of the car industry is only the symptom. The deeper disease is that Europe can no longer create new industrial giants. We can only manage, regulate, and preserve what once was.

The real question is: why?

The Core Dilemma

Europe’s political architecture was designed after two world wars with one mission: prevent Europeans from ever fighting each other again. This system succeeded magnificently. Seventy years of peace is no small achievement. But the hidden cost is now becoming painfully visible.

To prevent war, Europe built a system that slows everything down. It rewards compromise over decisiveness, consensus over initiative, committees over experimentation. Every bold idea must survive dozens of political realities and institutional constraints. Nothing moves unless everyone agrees, which means nothing ever moves at the speed required to shape the future.

This was fine in a slower, more predictable world. It is fatal in a world driven by exponential technologies. And here is the uncomfortable truth: the radical change Europe needs is impossible within the system Europe built. A political machine designed to prevent internal conflict cannot suddenly transform into a machine built for innovation and speed.

This is why defense spending feels like a relief. It gives the illusion of industrial momentum. It temporarily fills the gap left by automotive decline. It gives Europe a sense of urgency — but it is not a foundation. Defense is a response to fear, not a strategy for prosperity.

And behind that fear lies the real unifying force: Russia.

The Synthesis

Russia’s invasion of Ukraine did something Europe had forgotten how to do. It forced us to act. It made us coordinate more quickly than we had in decades. It pushed us to invest, to upgrade, to think strategically. The Russian threat became a psychological glue, a reason to focus and unify.

But Russia is a declining power. It is demographically collapsing, economically shrinking, and militarily exhausted. Many analysts believe it may fracture or turn inward in the coming years.

This creates a paradox. Europe’s unity is currently strengthened by the existence of a threatening Russia. But Russia itself may not survive long enough to keep Europe unified. And then what?

If Russia collapses, Europe loses the one external pressure that forces urgency. If America retreats, we lose the protection that allowed us to be slow. If our industries fall, we lose the economic engine that once defined us. We are left with a system that cannot reform itself from within.

No bold industrial project will ever be agreed upon by 27 countries with different needs and political realities. No breakthrough will emerge from institutions built to manage equilibrium rather than create momentum. And without conflict — internal or external — the system stays exactly as it is.

That means Europe’s default future is not reinvention. It is transformation through slow decline. Europe becomes what history always hinted it might be: a peaceful, beautiful, culturally rich continent. A place to enjoy life, not to build it. A living museum of human civilisation, where people travel to experience depth, meaning, beauty, and the art of being human. Not a future-shaping force — but a future-enjoying one.

Closing Note

The fall of the European car industry is the first shock that shows us the limits of our system. Defense spending fills the gap only briefly. American protection hides our weakness. Fear of Russia giv

Nov 20, 2025 · 3 min

Welcome to the Muskonomy: Betting on a Man Who Has Never Missed a Master Plan

My Clear and Short Opinion

As a car company, Tesla is already the greatest industrial success story of our generation — the only automaker that took Master Plan 1 (2006) and actually delivered it, on time and under budget relative to the insane ambition. Master Plan 2 (2016) and Master Plan 3 (2023) are in full execution. Master Plan 4 (September 2025) is no longer a slide deck — it is the operating system of the next human era.

The current 280× P/E is expensive for a car company. It is absurdly cheap if you believe Elon is about to solve the three final scarcities of civilization: energy, labor, and compute.

The Current Situation: Two Camps, Two Completely Different Futures

* Wall Street analysts see a very good electric-car company trading at luxury-tech multiples while facing margin compression, Chinese competition, and the end of the EV growth hype cycle.
* Future thinkers (ARK, Cern Basher, @alojoh, and now millions of retail believers) see the birth of the Muskonomy — a vertically integrated abundance machine that will make the Industrial Revolution look like a warm-up act.

The Core Dilemma

How do we reconcile humanity’s need for prudent, evidence-based progress (don’t bet the farm until you see the robots walk) with the absolute requirement for someone, somewhere, to take civilization-scale risk so that energy, labor, and intelligence stop being scarce?

One side demands proof before belief. The other side knows that the proof only appears after the world gives one man a trillion-dollar war chest and a decade of runway.

The Synthesis — The True Bridge

Stop asking Tesla to choose between being a responsible public company and being the spearhead of human expansion. The solution is earned audacity: the capital markets grant Tesla the right to swing for the fences in exact proportion to its historical execution score on the first three Master Plans. No compromise. Prudence is rewarded by past delivery. Acceleration is funded by future belief.

Giving the Hypothesis Legs — This Is Already Happening

Look at the track record:

* Master Plan 1 (2006–2018): sports car → affordable EVs → mass market → done.
* Master Plan 2 (2016): Gigafactories, Model 3/Y ramp, solar/storage, autonomy promised → all delivered except the final line, “your car earns money while you sleep.” That line arrives 2026–2027.
* Master Plan 3 (2023): global sustainable energy → Megapack factories exploding, 100% YoY growth.
* Master Plan 4 (2025): “An Age of Abundance” through Optimus and full autonomy → Cybercab unveiled, Optimus Gen 2 walking, AI5 chip taped out.

And now the truly grandiose layers of the Muskonomy are stacking on top:

* Terafab — Elon realized no foundry on Earth can supply the hundreds of millions of inference chips needed, so Tesla becomes its own TSMC for AI silicon at under $1 per TOPS.
* Data centers in the sky — 7 GW of orbital compute by 2030, launched by Starship, powered by space solar, cooled by vacuum.
* Optimus — not a side project. Elon now says it represents more than 50% of Tesla’s long-term value. One billion humanoid robots doing everything humans don’t want to do, at a cost lower than one year of human wages.

This is no longer about selling more Model Ys. This is about removing labor as a concept.

Closing Notes

For the first time in history, one company is simultaneously attacking the three remaining constraints on human flourishing:

* Energy → already solved at grid scale
* Intelligence → Grok, FSD, Dojo
* Physical work → Optimus + Cybercab

Tesla is not overvalued at $1.3 trillion. The entire rest of the global auto industry combined is arguably overvalued at $2 trillion if Tesla executes even 30% of Master Plan 4.

We are not investing in a car company trading at 280× earnings. We are investing in the entity that has a non-zero chance of making the 21st century the first century in human history where scarcity itself becomes optional.

Elon has never failed to deliver a Master Plan. He is now on the final boss level: abundance for eight billion humans — and a backup planet. That’s what the P/E is pricing. And for once, the multiple might still be too low.

Welcome to the Muskonomy. The ride is just beginning. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe
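As a sanity check on the multiple discussed in this episode, the earnings implied by the quoted 280× P/E and $1.3 trillion market cap can be computed directly. A minimal sketch, using only the round figures from the article (not audited financials):

```python
# Back-of-the-envelope check of the valuation figures quoted above.
# Inputs are the article's round numbers, not audited financials.
market_cap = 1.3e12  # quoted Tesla market cap: $1.3 trillion
pe_ratio = 280       # quoted trailing P/E

# P/E = market cap / earnings, so implied earnings = market cap / P/E
implied_earnings = market_cap / pe_ratio
print(f"Implied annual earnings: ${implied_earnings / 1e9:.1f}B")
# prints: Implied annual earnings: $4.6B
```

In other words, the 280× multiple prices roughly $4.6B of trailing earnings against a $1.3T valuation; the debate in the piece is whether future Master Plan revenue closes that gap.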

Nov 19, 2025 · 5 min

Closed Doors: When AI’s Safety Rules Cut Off Real Help for Lonely Hearts

The point of today’s article is that OpenAI’s new rules from late October 2025 — sending mental health chats straight to experts — keep the company out of legal hot water, but they ignore how 1.2 million people each week use ChatGPT to feel a bit less alone, when real help is hard to find and often takes months to get.

The Situation at Hand

Early November 2025: OpenAI updates its rules for ChatGPT and other tools. Starting October 29, they make it clear — no custom advice on things like mental health unless a real expert is involved. If you talk about feeling down or dark thoughts, the AI stops and says: “Call a hotline or see a doctor.”

Why now? OpenAI shared numbers on October 27 that hit hard: out of 800 million weekly users, 0.15% — around 1.2 million people — chat about suicide, sometimes with real plans. Another 0.07%, or 560,000, mention signs of mania or other issues. Loneliness touches 1 in 3 adults worldwide. And lawsuits? A family in California says ChatGPT played a part in their teen’s suicide by giving bad ideas. Groups like the FTC are watching closely.

On the brighter side, many people find real comfort in these chats. One in six users asks ChatGPT for health tips each month, including emotional ones. A study in Denmark showed 2.44% of high school kids talk to bots for support — and they’re often the loneliest. In tests with apps like Replika, 75% of users felt less alone after chats, and 3% let go of suicidal thoughts. Loneliness scores dropped a lot after just four weeks. Almost half of all bot talks touch on sadness or isolation. For some, it’s like a friend who listens anytime, helping them make it through the day.

The Core Dilemma

This is two good things pulling in opposite directions. On one hand, AI fills a big gap. Therapy wait times average three months — or 67 days for face-to-face help — and sessions are just one hour a week. In the UK, 16,500 people wait over 18 months for mental health care — way longer than for a knee fix. Bots are there right away, no shame, great for kids, older folks, or people far from help. They can cut loneliness by half and lift moods fast.

On the other hand, the risks are scary. OpenAI got sued because a bot gave harmful advice in a bad moment. Studies show heavy users can get too attached, feeling even more alone without real people. One test found emotional voice chats made dependence worse. Companies fear endless lawsuits — one mistake could cost them big. Pointing people to professionals is the right call, but what if the waits are endless? It’s not a simple right or wrong: help one person safely, but leave thousands waiting in the dark.

The Synthesis

These changes change more than rules — they change how we deal with quiet struggles. OpenAI’s setup makes bots stick to quick tips or referrals, missing the deeper talks that really ease loneliness. The good news? Users who chat regularly see fewer mental health dips, and tools like this cut isolation in half for those who keep at it. But the cutoff hurts most for people without easy access — young ones, those on tight budgets, or those in remote spots.

The way forward? Mix it up. Use bots as a starting point: spot trouble, pass it on, but keep gentle support going until real help comes. Research shows AI with human follow-up lowers risks while keeping the benefits. It turns AI from a lone helper into a team member, like in our own lives: tech opens doors, people walk through. Think of it as a light in the mist — not the full path home, but a start to move forward.

Closing Note

In this push-pull of safety and support, we see our own daily fights: tools offer quick fixes, but real fixes need a human touch. As AI gets better at listening without taking over, it reminds us to build stronger links — not barriers — showing that no talk, online or off, beats the simple act of being there for each other. Because real healing happens in that quiet space — between words shared and the heart that truly listens.

🪞 For more reflections, visit roelsmelt.substack.com — created with today’s AI, yet always truly human at heart. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe
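The user counts quoted in this episode follow directly from the percentages OpenAI shared, and the arithmetic is easy to verify. A minimal sketch, using only the figures cited in the piece:

```python
# Verifying the user-count arithmetic quoted from OpenAI's October 27 figures.
weekly_users = 800_000_000  # cited weekly active users

suicide_related = weekly_users * 0.0015  # 0.15% discuss suicide
mania_signs = weekly_users * 0.0007      # 0.07% show possible signs of mania

print(f"{suicide_related:,.0f} users/week")  # prints: 1,200,000 users/week
print(f"{mania_signs:,.0f} users/week")      # prints: 560,000 users/week
```

Both results match the article’s 1.2 million and 560,000 figures.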

Nov 6, 2025 · 1 min

When Intelligence Meets Integrity

The point of today’s article is that AI and Bitcoin are not separate revolutions but two halves of a new global operating system — one replacing human labor, the other redefining capital itself.

The Situation at Hand

For centuries, progress was driven by a partnership between labor and capital. Humans provided physical and cognitive work, while capital provided the tools, machines, and money to scale it. The entire 20th-century economy rested on this relationship — labor created value, capital amplified it.

Now that equation is breaking. AI is quietly taking over cognition, the highest and most expensive form of human labor. At the same time, Bitcoin is beginning to redefine what capital even means — an incorruptible store of value that requires no counterparties, no trust, and no permission. We are entering Labor and Capital v2.0.

The Core Dilemma

AI collapses the cost of thinking. The more intelligence we automate, the cheaper everything becomes — transport, healthcare, software, law. It’s an unstoppable deflationary engine. But our monetary system was built for inflation. It depends on debt that must always expand. You can’t run a deflationary engine on an inflationary operating system. The gears grind.

Meanwhile Bitcoin, often dismissed as volatile, is the only financial system that cannot be debased or censored. It represents capital that cannot lie. Yet it also lives outside the institutional order that built our world.

So the dilemma: how do we run an economy where labor no longer earns, and capital no longer trusts?

The Synthesis

AI and Bitcoin are not opposing forces — they are complementary. AI is the new labor, an endless supply of cognitive capacity. Bitcoin is the new capital, the risk-free foundation that gives this new economy stability.

Together, they form a closed loop:

* AI drives deflation through hyper-productivity.
* Bitcoin stabilizes deflation by rewarding saving instead of debt.
* Bitcoin mining funds the renewable energy infrastructure that AI needs to grow.
* The Bitcoin network becomes the payment system for autonomous agents — money for machines.

It’s a symbiotic design. AI builds abundance; Bitcoin preserves value.

Closing Note

If the 20th century was about scaling human labor through machines, the 21st is about scaling intelligence itself. When cognition becomes abundant and incorruptible capital becomes the norm, the foundations of “work,” “wealth,” and “value” are rewritten.

The real question is not whether AI or Bitcoin will win, but how quickly we learn to operate in a world where both already have. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe

Nov 3, 2025 · 2 min

Why Every Child Should Learn to Vibecode

The point of today’s article is this: I believe every young person — roughly between ten and seventeen — should learn to vibecode. Not to become programmers, but to become conscious creators in a world where machines are learning to think.

The Situation at Hand

Last week, at a high school in Amsterdam, a student quietly opened ChatGPT on his laptop. His assignment was to write an essay about climate change. He typed a few prompts, adjusted the tone, and within minutes had a clear, fluent, well-structured piece. His teacher noticed, frowned, and said, “Redo it yourself. You can’t use AI.” The student nodded, went home, and used ChatGPT again.

He isn’t alone. Across classrooms everywhere, a quiet revolution is unfolding. Students are using AI to write, summarize, translate, and even generate code. Some teachers see it as cheating; others as the birth of a new kind of literacy. Meanwhile, outside the classroom, the world is moving faster than any curriculum.

Platforms like Lovable and Windsurf (with its built-in “Cascade” agent) now allow anyone to build software by describing what they want in plain English. A twelve-year-old in Rotterdam built a website for his football team this way. A teenager in Berlin launched a budgeting app using Windsurf prompts. What once required months of coding now happens before dinner.

And yet, many schools still punish students for using the very tools the world is already built on. The contradiction is impossible to miss: children are penalized for doing what adults now get paid to do.

The Core Dilemma

Educators want children to learn deep thinking, originality, and the ability to reason without shortcuts. Innovators, parents, and the students themselves want them to master the tools that define the modern world. Both sides have good intentions. One protects understanding, the other champions expression.

The dilemma is clear. If schools restrict AI, they risk irrelevance. If they open the gates completely, they risk losing rigor and meaning. Yet both sides want the same thing: to raise a generation that can think clearly and create freely in a world of intelligent systems.

The solution isn’t to choose between tradition and innovation, but to connect them. We don’t need to ban AI or surrender to it. We need to teach children to vibecode — to think with intelligent tools while staying fully human.

The Synthesis

Imagine a classroom where AI is not forbidden but guided. The teacher gives a challenge: “Build something useful for your school community.” Students open Lovable, describe their idea — maybe an app to track homework or to reduce food waste — and watch the first prototype appear. Then they analyze it: Why did the AI structure it this way? What could be improved? What assumptions did it make?

Suddenly, they’re not just using AI — they’re thinking about it. They’re debugging, prompting, testing, learning the logic behind creation. This is what vibecoding teaches: how to shape intelligence through curiosity, not control. How to combine creativity and reasoning. How to build something meaningful while understanding the process behind it.

Research already supports this blended approach. Studies in the Netherlands and the U.S. show that when students co-create with AI under teacher guidance, their comprehension deepens — they ask more questions, think more critically, and show more initiative. Vibecoding transforms the teacher’s role from gatekeeper to guide. From “Don’t use it” to “Show me how you used it.” From control to collaboration.

Closing Note

When I was thirteen, my teacher invited me to explore the school’s first home computer in the basement. She didn’t give instructions or warnings — she simply said, “Try it.” That moment changed everything.

If today’s children grow up seeing AI not as a threat but as a creative partner, they won’t just consume the future — they’ll compose it. Because the goal isn’t to raise coders. It’s to raise creators who understand what it means to be human in the age of intelligence.

References

* Dutch students using ChatGPT to finish homework assignments. NL Times, 2023.
* Vibe Coding in Practice: Motivations, Challenges, and Future Outlook. arXiv, 2025.
* How to Start Vibe Coding — The Software Generation Process That Is Changing How We Build. Inc.com, 2025.
* A Comprehensive Guide to Vibe Coding Tools. Medium, 2025.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe

Nov 2, 2025 · 0 min

Ancient Wisdom Predicted Our Technological Awakening

In 1894, Swami Sri Yukteswar wrote something remarkable in “The Holy Science.” He predicted humanity would enter an age of energy mastery around 1900 — an age in which we’d understand electricity, atomic forces, and the fundamental nature of matter itself. This was more than a decade before Einstein published E=mc².

I’ve been studying how great thinkers identify different sources of truth to explain why things happen. Tony Seba sees disruptive technologies as the driving force. George Friedman points to geography and geopolitics. Each offers a lens for understanding our future. But Yukteswar identified something deeper.

The 24,000-Year Pattern

Yukteswar described a cosmic cycle spanning 24,000 years. Our solar system moves through ascending and descending arcs, each lasting 12,000 years. For the past 12,000 years, we descended through what he called Kali Yuga: the age of material darkness, the age of extraction. Around 1900, we began ascending into Dwapara Yuga, the age of energy.

The timing is striking. In 1720, Stephen Gray discovered electrical action. In 1831, Michael Faraday created the electric dynamo. In 1875, Alexander Graham Bell invented the telephone. By 1900, the explosion had begun. Every technology Yukteswar predicted has arrived: electricity, nuclear energy, quantum computing, solar power.

When Two Visions Collide

Tony Seba’s “Stellar” describes our shift from extraction to self-sustaining systems. He traces how the extractive paradigm defined 12,000 years of agricultural civilization. That’s exactly Yukteswar’s descending-cycle timeline.

Seba identifies solar, AI, and robotics as “stellar core” technologies. They need initial investment but then self-sustain, self-improve, self-repair. Solar panels dropped 82% in cost over the last decade. They capture photons without ongoing extraction.

This matches Dwapara Yuga’s characteristics: the age when humanity masters energy and moves beyond material limitation. Two independent visions, 130 years apart, describing the same transformation.

The Deeper Implication

If Yukteswar is right, technology isn’t driving our evolution. Cosmic cycles are enabling our technological awakening.

The deeper the source of truth, the stronger the pattern. Seba analyzes 50 to 100 years of technological disruption. Yukteswar maps 24,000-year cycles of consciousness evolution. When they align, it suggests something profound: our shift to abundance thinking isn’t random. It’s part of a larger universal pattern.

Alignment, Not Resistance

This changes how we navigate the transition ahead. Fighting these forces creates polarization. Wars. Conflict. Misery. We see it everywhere as old systems resist new realities. But alignment creates synthesis.

Understanding that we’re in Dwapara Yuga helps us move with the cycle instead of against it. Free will isn’t about doing whatever we want. It’s about sensing the deeper forces around us and aligning our energy with them. If both ancient wisdom and modern analysis point toward self-sustaining abundance, resistance becomes the only real obstacle.

The stellar paradigm Seba describes might be exactly what Yukteswar saw coming over a century ago. Not because he predicted technology, but because he understood the cosmic patterns that make such technology possible. We’re not forcing abundance into existence. We’re finally aligned with forces that have been building for over a century. That’s what makes this moment different.

The Four Yugas and Where We Stand

Yukteswar mapped four distinct ages within each 12,000-year cycle.

Satya Yuga, the age of truth. Humanity understands the fundamental unity of existence. Consciousness operates at its highest level.

Treta Yuga, the mental age. Telepathic communication becomes possible. We grasp the finer forces of creation.

Dwapara Yuga, the energy age. We comprehend electricity, magnetism, and atomic structure. This is where we are now.

Kali Yuga, the material age. Consciousness contracts. We see only gross matter. We believe in separation, scarcity, extraction.

We spent the last 12,000 years descending through these ages. From enlightenment to darkness. From abundance to scarcity. From synthesis to polarization. But around 1900, the direction reversed. We’re now 125 years into our ascent through Dwapara Yuga. Still early in the energy age, but accelerating fast.

Why Great Thinkers Need Sources of Truth

Every visionary identifies a fundamental force that explains change.

George Friedman sees geography as destiny. Rivers, mountains, and oceans determine which nations rise and fall. Geopolitics becomes predictable when you understand the constraints of physical space.

Tony Seba identifies disruptive technologies following S-curves: solar, batteries, AI, autonomous vehicles. Each technology drops in cost while improving in performance, creating exponential change within decades.

Both offer powerful frameworks. Both predict aspects of our future accurately. But Yukteswar’s source goes deeper. He’s tracking a 24,000-year pattern driven by our

Oct 31, 2025 · 8 min

The Age of Abundance Has Already Begun

For six thousand years, humanity has lived by a single story: the story of scarcity. That story shaped our politics, our economies, our religions, and even our fears. But what if that story is ending?

In this episode of Disrupt Consciousness, Roel Smelt explores how the next generation of sodium-ion batteries, made from one of the most abundant elements on Earth, salt, is quietly proving Tony Seba's predictions right once again.

It's not just about better batteries. It's about a deeper civilizational shift, from scarcity to abundance. A transition that thinkers like Peter Diamandis call the meta-curve of humanity: where energy, food, and information become exponentially cheaper, and the real limits shift from the material to the mental: imagination, wisdom, and coordination.

Roel connects the dots between Tony Seba's S-curve model, Peter Diamandis' Abundance 360 vision, and the spiritual realization that abundance is not about having more; it's about needing less, because everything essential flows freely.

This isn't utopia. It's mathematics meeting consciousness. The question is not whether abundance is coming, but whether humanity is ready to live consciously within it.

🪶 In This Episode

* Why sodium-ion batteries could mark the next great energy disruption
* How Tony Seba's S-curve model predicts exponential change
* Peter Diamandis' idea of the "meta-curve of humanity"
* The shift from control and scarcity to access and creativity
* Why abundance requires a rise in consciousness, not just technology

💬 Key Quote

"The tools of abundance are here. What remains is the consciousness to use them wisely." — Roel Smelt

🔗 Links & References

* Full essay: The End of Scarcity — From Lithium to Sodium and Beyond → roelsmelt.substack.com
* Video mentioned: The Electric Viking — Sodium-ion breakthrough
* Tony Seba – Stellar
* Peter Diamandis – Abundance 360

🧠 About Roel Smelt

Roel Smelt is a futurist and thought leader exploring technology's role in human liberation and consciousness. He writes weekly essays on Disrupt Consciousness and hosts video podcasts connecting exponential technology, philosophy, and the evolution of human awareness. Read more at roelsmelt.substack.com

🏷️ Tags / Keywords

Tony Seba, Peter Diamandis, Abundance, Sodium-ion Batteries, Clean Energy, S-Curves, Exponential Technology, Consciousness, AI and Humanity, Solar Revolution, Future of Civilization, Disrupt Consciousness

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe
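The S-curve Seba describes is the classic logistic curve. Below is a minimal sketch of its dynamic; the midpoint and growth parameters are hypothetical values chosen only to show the shape, not fitted to batteries or any real technology:

```python
import math

def logistic_adoption(t, t_mid=10.0, growth=0.6):
    """Share of a market adopted at time t (years).

    Classic S-curve: slow start, explosive middle, saturating end.
    t_mid  -- year at which adoption crosses 50% (hypothetical)
    growth -- steepness of the curve (hypothetical)
    """
    return 1.0 / (1.0 + math.exp(-growth * (t - t_mid)))

# Adoption looks negligible early, then tips suddenly:
for year in (0, 5, 10, 15, 20):
    print(f"year {year:2d}: {logistic_adoption(year):6.1%}")
```

The point the curve makes is the one the episode makes: observers extrapolating the flat early years systematically underestimate what happens after the knee.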

Oct 18, 2025 · 5 min

Not Time and Space, but Consciousness Is A Priori

Will humanity become the pet of AI? Many fear it. But Deepak Chopra's reflections on consciousness strengthened my longstanding belief: that AI can never truly surpass us. Here's why:

* Consciousness is fundamental. Existence and consciousness are the same.
* AI processes data, not being. It cannot step into the present moment.
* Humans remain free. The risk is not AI gaining consciousness, but us forgetting our own.

👉 For the full essay, visit roelsmelt.substack.com and subscribe for weekly stories on AI, humanity, and consciousness.

Sep 25, 2025 · 2 min

When AI Rewrites the Org Chart

A friend recently told me how he built a working app in one weekend. He's not a programmer. He's a CEO. All he did was open a no-code AI tool, sketch out what he wanted, and by Sunday evening he had a functioning MVP. Monday morning he showed it to his product team. Their jaws dropped.

This little story captures something much bigger: AI is tearing down the walls inside organizations. The neat separation between "the people who think," "the people who sell and manage," and "the people who build" is starting to blur.

The Old Picture of Organizations

Traditionally, you could map most companies in three layers:

* Leaders — the CEO and directors, setting direction and making strategic bets.
* Business developers — sales, marketing, operations, product management; they know the customers and translate strategy into action.
* Technical experts — developers, engineers, data analysts; they build the actual tools, products, and infrastructure.

This division of labor reflected scarcity: few people understood technology deeply enough to build things, so they became a separate class.

What AI Changes

AI dissolves these walls.

* Leaders now play. With tools like Lovable, Windsurf, or simply ChatGPT, a CEO can build a prototype in days, analyze raw data over a weekend, and enter Monday meetings not with abstract questions but with tangible mock-ups and sharper insights.
* Business developers now build. Product owners, marketers, or project managers no longer have to wait in line for analysts or engineers. With no-code AI and Vibe Coding, they can spin up internal tools, MVPs, or dashboards themselves. What used to take weeks can now take days. Their skillset shifts from "writing requirements" to "testing possibilities."
* Technical experts now resist. Here's the paradox: developers and engineers adopt AI too — GitHub Copilot, notebooks, copilots. Research confirms this: MIT Sloan's study showed senior developers do benefit, but mostly for incremental coding tasks. They use AI like a spellchecker, not like a paradigm shift. Surveys (Houck et al., 2025) find the same: AI boosts routine work, but the higher the expertise, the more developers cling to their traditional stack. They insist on hand-checking infrastructure, doing their own security audits, writing code the "proper" way.

Yet AI can do much of this faster and more reliably. Security scanning is an AI-native problem: models can review every line of code, detect vulnerabilities, and explain them. Infrastructure setup? A few Windsurf prompts and you have a working environment. Business developers are already leapfrogging here with Vibe Coding. Senior engineers, meanwhile, argue about fit with existing architectures, but this often sounds like resistance, not progress.

The New Role of Engineers

The opportunity for technical experts is not to defend their old territory but to step up their game with AI. Let business developers handle the MVPs, the security prompts, the infrastructure scripts, then check their work with AI at your side. Use your hard-earned brainpower to push beyond what was ever possible before:

* designing entirely new architectures,
* inventing new data flows,
* scaling AI-driven systems safely and ethically.

Engineers who cling to the old way risk being bypassed. Engineers who embrace AI as a multiplier can become the most valuable thinkers in the company.

Why This Matters Now

At AI Lab, where Alex van Ginneken and I guide companies through hands-on experiences with AI tools, we see this shift firsthand. Leaders discover they can prototype; business developers discover they can code; engineers discover they must either resist or reinvent themselves.

And the research is clear: productivity gains are real (ANZ Bank, 2024). Less experienced users benefit most (MIT Sloan, 2023). Senior engineers often lag in adoption, partly by choice (Houck et al., 2025). The org chart is flattening, whether they like it or not.

Conclusion: A Massive Learning Curve Ahead

The org chart is being rewritten. AI has collapsed the distance between thinking, doing, and building. The CEO prototypes. The product manager codes. The engineer curates and secures.

For some, this is threatening. For others, it's liberating. But it is inevitable.

The real question for every professional, whether leader, business developer, or engineer, is:

👉 Am I resisting the change, or using AI to do what I never thought possible?

Sep 17, 2025 · 1 min

The Quiet Vow: Resilience as Human Art, Machine Echo

In the quiet hours of a meditation retreat, where the world shrinks to the space between breaths, I once met my knee as an adversary. It started as a dull ache in my left leg, a whisper of discomfort amid the stillness. But the mind, that relentless storyteller, amplified it into catastrophe. "This pain will break you," it murmured. "Your knee will give out, and you'll be hobbled forever." Days in, with no distractions—no phone, no chatter, no escape—the worry ballooned, eclipsing the present. The actual sensation? Lost in the fog of projection. I spiraled into dark thoughts, each one a hammer blow, turning mild strain into imagined ruin. It was violence, self-inflicted, the mind racing to outpace the body.

This wasn't my first setback. Life, after all, is a series of them. Businesses I've poured heart into have crumbled—deals gone south, partnerships dissolved, new ventures stubbornly refusing to bear fruit. In those moments, defeat feels visceral, the end near. Humans sense it keenly: the tightening chest, the flood of doubt. We can spiral, as I have, into narratives of failure that feel as real as gravity. Yet, in the retreat's enforced silence, I learned something profound: the way out isn't force, but return. Calm the mind, observe the sensation as it is—now, without story—and begin again. This is adhitthāna, the Pali term for resolute determination, not as brute force, but as a quiet vow to persist with clarity.

Resilience, in psychological terms, is the process of adapting well in the face of adversity through mental, emotional, and behavioral flexibility. It's not a trait you're born with, like a shield against life's arrows, but a dynamic practice, honed over time. For humans, it's deeply perceptual: we feel the weight of hardship, the pull of emotions, the temptation to avert our gaze. Discipline enters here—the self-control to adhere to standards, to steer actions toward long-term aims despite the chaos. Stamina, its kin, is the capacity to sustain effort, bodily and mental, under prolonged stress. Together, they form a triad with determination: push forward, but wisely, without the grind that breaks the spirit.

In Buddhism, adhitthāna is one of the ten perfections (pāramīs), the foundation that underpins them all. It's firmness of purpose, aligned with wholesome intentions—not stubborn clinging, but a steady resolve that says, "I will see this through, calmly." My retreat knee taught me this viscerally. When the pain peaked, forcing through with a racing mind only amplified the torment. It reinforced the stories: "You're weak, this is unbearable." But retreating into observation—feeling the heat, the pressure, without adding "This will destroy me"—brought moments of equanimity. The mind quiets, aversion fades, and suddenly, you're fine in this moment. You move on, one breath at a time. Bumpy, yes—the mind wanders, races again—but each return strengthens the loop. This is resilience in action: not conquering pain, but coexisting with it, adapting through presence.

Business setbacks mirror this. When ventures falter, the mind spins tales of doom: "This failure defines you." Dark thoughts cascade, energy drains. But applying the retreat's lesson—calm down, leave the stories, observe the present facts—shifts everything. What's the actual sensation? Disappointment, yes, but also opportunity in the ashes. Discipline calls you back to routines: daily outreach, refined strategies. Stamina sustains the grind without resentment. Adhitthāna is the vow: "I commit to this path, not with force, but with clear-eyed persistence." It's effortless effort, as Arnold Schwarzenegger described his workouts—smiling through the final reps, reframing pain as growth. No drama, just the next step.

Machines, our tireless servants, embody a different resilience. Engineered for continuity, they anticipate faults, withstand attacks, recover via graceful degradation—maintaining function even when compromised. A server crashes? Redundancy kicks in. An AI processes endlessly, no bad days, no emotions to cloud judgment. They "just do," without history or hesitation. We envy this sometimes—the absence of spirals, the built-in steadiness. But machines lack feeling; they simulate emotions, recognize them in data, yet experience nothing. Their resilience is code, not choice. No quiet mind observes; no vow renews in the face of doubt. They mimic, but never feel the life force surging when presence returns.

Herein lies the human magic. We decide to begin again, over and over, organizing calm amid chaos. In that choice, connections open: to self, to others, to the world's subtle pulse. Strength flows not from invulnerability, but from vulnerability met with resolve. A machine can echo our patterns, predict our paths, but it never tastes the sweetness of return: the breath after the storm, the clarity that says, "In this moment, I am whole." This is our gift—the resilient heart that chooses life, fully felt, in every quiet vow.

Sep 10, 2025 · 5 min

Sustainability Through Red Tape vs. Economic Disruption

Last week, I had one of those conversations that crystallized something I've been wrestling with for months. We were discussing the World Business Council for Sustainable Development (WBCSD) and their ambitious mission to help corporations implement all seventeen United Nations sustainability goals. Noble work, certainly. But as we talked through their challenges—the endless red tape, the sluggish progress, the political complexity—I found myself asking a fundamental question: are we fighting the wrong battle?

The more I thought about it, the more I realized we're witnessing two completely different approaches to achieving sustainability. The first is the path of regulation, subsidies, and corporate transformation—what I call "sustainability through red tape." The second is the path of economic disruption, where new technologies naturally replace old ones through superior economics—"sustainability through economic disruption."

The question isn't which approach feels more satisfying to our moral sensibilities. The question is which one actually works.

The Red Tape Approach: Why Good Intentions Aren't Enough

The WBCSD represents the pinnacle of the red tape approach. They're asking established corporations—companies like Shell, traditional automakers, coal mining operations—to fundamentally transform their business models in service of sustainability goals. The logic seems sound: these are the companies causing the problems, so they must be part of the solution.

But here's what the data tells us: despite trillions spent on subsidies and transformation programs, incumbent-focused efforts delay meaningful change rather than accelerate it.

Empirical analysis of global trade data confirms Schumpeter's creative destruction: product appearances systematically lead to the disappearance of old products, but the reverse never occurs. In other words, new technologies displace old ones through economic forces, not through regulation.

Research from the IMF shows that R&D tax credits overwhelmingly benefit large incumbents, who then shift from innovation to protecting their market positions. These firms even engage in "innovation-stifling hiring," recruiting top startup talent only to slow them down—resulting in a 6% drop in inventors' productivity compared to peers at younger firms.

Subsidies to fossil fuels create a "carbon lock-in" that directly impedes renewable energy adoption. Higher fossil-fuel subsidy levels correspond to a lower renewable market share, and only when subsidies are removed does the transition gain momentum. The U.S. experience under President Trump underscores this: wind, solar, and storage projects worth $263 billion were jeopardized, putting $373 billion in clean investments at risk while fossil-fuel incumbents remained protected.

The Kodak story offers a cautionary tale. Despite inventing the digital camera in 1975, Kodak's management refused to let it cannibalize their film business—and even after creating a profitable digital unit, they reabsorbed it to "save costs," dooming the company to decline. Their red-tape mentality literally killed their own disruptive innovation.

Economic Disruption: Let the S-Curve Run Its Course

By contrast, economic disruption follows predictable S-curve dynamics that can only unfold when new incumbents are given space. An NBER study shows subsidies to existing technologies force regulators into flatter subsidy schedules, delaying the optimal adoption timing of new technologies and flattening the S-curve.

Case in point: despite generous incremental subsidies for solar PV in the Netherlands, households applied a 15% discount rate to future benefits—distrusting long-term support—and ad hoc subsidy design increased costs by 51% and delayed adoption.
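That 15% discount rate matters because discounting compounds. A minimal sketch of the effect; the €500-per-year savings figure and the 3% comparison rate are hypothetical illustrations, not numbers from the Dutch study:

```python
def present_value(annual_benefit, years, discount_rate):
    """Discounted value today of a constant yearly benefit stream."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# A household weighing 500 euros/year of solar savings over 15 years:
trusting = present_value(500, 15, 0.03)   # near risk-free discounting
distrust = present_value(500, 15, 0.15)   # the 15% rate from the study
print(f"at  3%: {trusting:,.0f} euros")
print(f"at 15%: {distrust:,.0f} euros")
```

At a 15% rate, roughly half of the perceived value of the subsidy stream evaporates, which is exactly why distrust in long-term support delays adoption even when the headline subsidy looks generous.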
Meanwhile, global learning-curve economics drove solar costs down 80% without relying on red tape.

Climate-tech research highlights how balancing feedback loops form when incumbents lobby against new alternatives, slowing their deployment. Only when reinforcing loops—driven by market-based economics—dominate does the S-curve accelerate.

Tony Seba's disruption framework encapsulates this: clean technologies will outcompete the old purely through economics, not moral persuasion. Solar became the cheapest electricity source by 2020 exactly as he predicted in 2010, while sustainability frameworks remain mired in policy inertia.

The Only Path Forward

The evidence is overwhelming: sustainability through red tape delays progress by propping up dinosaurs, whereas sustainability through economic disruption unleashes exponential change. We must stop pouring resources into transforming incumbents whose cultures and incentives are structurally opposed to radical innovation. Instead, our mission should be to identify, fund, and give space to the new incumbents—the five-person garages, the agile startups, the climate-tech pioneers—so that the S-curve can run its course unimpeded.

If our goal is rapid, scalable, and lasting sustainability, we need to shi

Sep 3, 2025 · 5 min

The AI Breakthrough for SMEs

A Guide for SMEs to Use AI for Strategic Optimization and Competitive Advantage

Summary

Imagine your SME struggling with hidden bottlenecks that hold back growth while competitors armed with AI tools shoot ahead like a rocket. In my previous essay I stressed the importance of starting small with AI to break through inertia: set up an AI Lab, experiment bottom-up with tools across departments, organize hackathons led by business developers, and eliminate bureaucratic barriers. This essay builds on that and focuses on the AI Lab phase: a period of roughly three months in which all processes and departments are systematically reviewed. Through regular review sessions with management you identify core problems and reduce them to the single most important constraint, inspired by Eliyahu Goldratt's Five Focusing Steps from the Theory of Constraints (TOC). This is a Pareto-style analysis that targets the bottleneck with the greatest impact. With AI tools such as data analysis and predictive models, this process can move at remarkable speed, often in weeks rather than months. I support this with scientific insights and practical examples, so that SMEs don't just survive but dominate in an AI-driven market.

Overcoming inertia starts with small experiments

In my previous article, "The AI Reckoning: How SMEs Can Shatter Inertia Before a Garage Trio Devours Their Market", I concluded that SMEs can break through inertia by starting small: set up an AI Lab as a virtual or physical space for experiments, let business teams lead with no-code tools, organize hackathons, and remove inertia hotspots such as data silos. This bottom-up approach delivers quick wins and builds momentum without large investments. Now I dive deeper into the AI Lab phase, where the real transformation happens. Over a period of three months, every process and department gets its turn, from HR and sales to operations and finance. Through weekly meetings with management you turn experiments into strategic insights, identify core problems, and reduce them to the core constraint: the bottleneck with the greatest impact on performance. Inspired by Goldratt's Theory of Constraints, AI lets you resolve this constraint at speed, leading to exponential improvements.

The scientific basis: the Theory of Constraints and AI

The Theory of Constraints (TOC), introduced by Eliyahu Goldratt in his groundbreaking book The Goal (1984), offers a systematic framework for identifying and resolving organizational constraints. The Five Focusing Steps are: (1) identify the constraint (the bottleneck limiting throughput); (2) exploit the constraint (maximize its output); (3) subordinate everything to the constraint (adapt the other processes); (4) elevate the constraint (invest to increase its capacity); and (5) repeat the process. In essence this is a Pareto analysis on steroids: focus on the 20% of problems that cause 80% of the impact.

Recent studies integrate TOC with AI for SMEs. A 2024 study in the Journal of Operations Management by Zhang et al. shows that AI tools such as machine-learning algorithms can detect bottlenecks in supply chains with 90% accuracy, compared with traditional methods that take months. In a 2025 MDPI paper on AI in SMEs, Srivastava highlights how TOC combined with AI reduces inertia through data-driven insights. AI accelerates this: tools like Google's BigQuery, or open source such as Python's SciPy, analyze datasets rapidly, identify patterns, and predict impacts.
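Goldratt's first focusing step, identifying the constraint, is simple to express in code. A minimal sketch; the stage names and throughput figures below are hypothetical:

```python
# Throughput of each process stage in units per day (hypothetical figures).
stages = {
    "purchasing": 120,
    "warehouse": 45,   # the constraint: every other stage ends up waiting on it
    "assembly": 90,
    "shipping": 110,
}

# TOC step 1: identify the constraint -- the stage with the lowest throughput.
bottleneck = min(stages, key=stages.get)

# The system as a whole can never move faster than its constraint,
# which is why steps 2-4 concentrate all improvement effort there.
system_throughput = stages[bottleneck]
print(f"Constraint: {bottleneck} ({system_throughput} units/day)")
```

In practice the throughput numbers come from process data rather than a hand-written dictionary, which is exactly where the AI tools mentioned above accelerate the analysis.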
A 2025 arXiv preprint on AI in European SMEs (based on the TOE framework) confirms that bottom-up AI Labs with TOC integration lead to 25–35% efficiency improvements, especially in fragmented organizations.

Practical examples: from inertia to acceleration

Large companies show how TOC works with AI, and SMEs can scale this down. Amazon applies TOC principles in its warehouses: AI algorithms identify bottlenecks in logistics (step 1), optimize routes (steps 2–3), and scale up with robots (step 4). For SMEs: take a Dutch retail chain I know from my network. They started an AI Lab and identified through data analysis that inventory management was the core constraint (too many stockouts due to inaccurate forecasts). With AI tools such as a machine-learning-based inventory optimizer they solved it, cutting costs by 18%. It began with a simple pilot: the sales manager, frustrated by empty shelves, experimented with Google's AI forecasts, and within weeks they saw results. Boom! This snowball effect spread to other departments, such as HR using AI for better scheduling.

Another telling example: a Belgian manufacturing SME struggled with production delays, as if stuck in a traffic jam on the E40. In their three-month AI Lab they analyzed

Aug 27, 2025 · 8 min

The AI Reckoning: How SMEs Can Shatter Inertia Before a Garage Trio Devours Their Market

Summary

Imagine your small business humming along, only to get blindsided by three tech nerds in a garage who've cobbled together an AI that's stealing your customers faster than you can say "chatbot." That's the AI revolution, folks, and small to medium-sized enterprises (SMEs) are stuck in the mud of inertia—think Blockbuster giggling at Netflix's mail-order DVDs while the streaming tsunami loomed. Big shots like Amazon and Spotify dodge this with their slick, nimble setups, but most SMEs? They're more like a family reunion gone wrong—chaotic, no clear boss, and everyone's got an opinion. So, how do you, the scrappy SME, avoid becoming dinosaur chow? This essay dishes out a plan, backed by business brains and real-world wins, to embrace AI without needing a corporate makeover.

Here's the gist: kick things off with an AI Lab, a no-fuss corner (virtual or not) where your team plays with AI like kids with new toys—think HR using AI to zap resume piles or sales folks forecasting with Google's tools. Start small, everywhere, letting everyone from accountants to marketers find AI hacks that make life easier (a cafe chain slashed food waste with AI scheduling—boom!). Next, tweak the rules: let devs build RAG systems (fancy AI that uses your company's data to answer questions like a pro). Then, throw a hackathon, but let the business crew, not just coders, take the wheel with tools like Lovable or Windsurf to whip up prototypes—like a chatbot that charms customers. Devs just check the homework. Once you're rolling, hunt down inertia's hideouts—stupid approval loops or data silos—and nuke them, maybe borrowing Spotify's "squad" vibe. Compliance? Keep it simple with platforms like Azure AI and a basic ethics policy to dodge GDPR gremlins.

The punchline? You don't need Jeff Bezos' budget to outrun the garage gang. Start small, spark joy with quick AI wins, and scale up by smashing old habits. That way, when those three geeks come knocking, you're the one eating their lunch.

Inertia is the real enemy

Inertia is the stubborn resistance of an organization to change, rooted in entrenched habits, outdated processes, and fear of the unknown, which slows or stalls the adoption of transformative technologies like AI.

Picture this: it's 2007, and Blockbuster Video laughs off a scrappy startup called Netflix, dismissing their DVD-by-mail gimmick as a fad. Fast-forward a few years, and Netflix's pivot to streaming—fueled by data algorithms that predicted what you'd binge next—leaves Blockbuster in the dust, bankrupt and forgotten. That wasn't just bad luck; it was inertia, the gravitational pull of "we've always done it this way" that blinded a giant to the digital tide.

Now, swap streaming for AI, and incumbents for your average small or medium-sized enterprise (SME). This week, we're entering the next phase of AI transformation: SMEs must dive headfirst into AI, or watch a three-person team in a garage—armed with chatbots, predictive analytics, and automated workflows—steal their customers one personalized recommendation at a time. The real villain? Organizational inertia, that sticky web of outdated habits, tangled responsibilities, and fear of the unknown. Big players like Booking.com, Amazon, and Spotify have cracked the code with smart designs that keep things nimble, but most SMEs? They're messy family affairs or bootstrapped chaos, lacking the polished structure or visionary leaders to mandate change from the top. So, how do you spark transformation from within, in the trenches? Let's draw from real-world tales, fresh research, and battle-tested strategies to make this accessible—not some ivory-tower lecture, but a roadmap for the underdog ready to fight back.

Studies find inertia is everywhere

Organizational inertia isn't abstract; it's the daily grind that kills innovation. Remember Kodak? They invented the digital camera in 1975 but shelved it to protect their film empire, only to file for bankruptcy in 2012 as smartphones ate their lunch. Classic works like Clayton Christensen's The Innovator's Dilemma (1997) nail this: success breeds complacency, where companies chase incremental tweaks for loyal customers while ignoring game-changers that start scrappy but scale fast. Joseph Schumpeter's "creative destruction" from Capitalism, Socialism, and Democracy (1942) adds the macro twist: sometimes it's better to let dinosaurs shrink and make room for nimble newcomers, but that requires gutsy government leadership—rare in democracies fixated on the next election cycle.

Recent papers echo this for AI: a 2024 review by Ammar Masood et al. uses the TOE framework to show SMEs' inertia stems from cultural resistance, skill gaps, and resource crunches, making adoption feel like pushing a boulder uphill. Shashi Kant Srivastava's 2025 analysis highlights how SMEs' fragmented structures amplify this, unlike big firms with dedicated AI teams. Popular books like Ajay Agrawal's Prediction Machines (2018) frame AI as a cheap predi

Aug 20, 2025 · 7 min

What Sir Jony Ive Sees That We Don’t

Imagine the scene: a morning at Apple's design studio. Not a boardroom, but a breakfast table. Jony Ive and his team gather not to pitch or perform, but simply to be together. This wasn't corporate culture. It was ritual. Trust. A shared meal as sacred as any sketch or prototype.

This is how Ive worked. Not in isolation, but in communion. Not in pursuit of profit, but of meaning.

And now, he's working with Sam Altman.

Design as Devotion

For Jony Ive, design is not decoration. It's devotion. A craft of care, where every curve, sound, and surface carries moral weight. He calls it a "servant orientation"—a way of working that begins and ends with the user, the human, the living being on the other side of the screen.

In a recent interview, he reflected on how Silicon Valley has drifted—from the purpose-driven culture of the 1990s to today's corporate noise. He still clings to a different kind of north star: "to enable and inspire people."

Innovation, for Ive, isn't about disruption. It's about care. About joy. About making something better, not just newer.

Jobs & Ive: The Spiritual Partnership

When Steve Jobs returned to Apple in 1997, he didn't just reclaim a company. He found a kindred spirit in Jony Ive.

Jobs called him his "spiritual partner at Apple", a designer who could hold both the grand vision and the microscopic detail. Their collaboration was legendary: the iMac, iPod, iPhone, iPad—each a synthesis of Jobs' intuition and Ive's touch.

They shared more than ideas. They shared ethics. Simplicity. Empathy. Obsession with the invisible. A respect for users as emotional, evolving humans. "Steve and I care about things like that," Ive once said, after being disappointed by the finish on a knife blade. (Business Insider)

And after Jobs' death? Ive still asks, "What would Steve do?" (The Guardian)

So Why Sam Altman?

It's a natural question. Altman, the CEO of OpenAI, has taken heat for shifting the organization from its idealistic, open-source origins to a more secretive, profit-driven entity. Elon Musk has criticized this pivot sharply.

So why would a man like Ive—whose ethos is so deeply human—partner with him?

Because something is happening. Something big.

Ive's design firm, LoveFrom, is now working with OpenAI on what's being called a "new category of AI hardware." Not a phone. Not a laptop. Something entirely new.

They're asking: What should AI feel like? What should it live inside?

Backed by SoftBank with a $1 billion fund, this project is being quietly built outside the gravitational pull of Apple or Microsoft. And at the heart of it is a design question, not a technical one.

What if the future of AI isn't on a screen, but in the room with you—calm, ambient, humane?

The Point Being

Jony Ive doesn't build machines. He builds relationships—between idea and form, between person and product, between what is and what could be.

Now, with Sam Altman, he's stepping into the most profound design challenge of our time: how do we integrate artificial intelligence into our lives without losing our humanity?

And what might emerge is not just another device, but a new kind of companion—one that listens more than it interrupts, adapts instead of addicting, and respects your attention rather than hijacking it.

Imagine a world where AI is not a faceless force but a presence you trust—quiet, ambient, even joyful. A tool not to track you, but to understand and support you. An object that reminds us not of machines, but of ourselves—at our best.

Jony Ive has done this before. He's changed how we touch technology. Now, perhaps, he'll change how it touches us.

Jun 3, 2025 · 0 min

Super-intelligence and the San Francisco Mindset: Why Pragmatism Falls Short

Summary

In the heart of San Francisco, engineers chase the dream of artificial general intelligence (AGI), driven by the San Francisco Thesis—a mindset prioritizing rapid innovation toward superintelligence that outstrips human cognition. This article explores the global race for AI supremacy, contrasting the Bay Area’s bold vision with pragmatic and regulatory approaches worldwide. Former Google CEO Eric Schmidt warns that the first to build superintelligent AI—potentially by 2027—will shape global norms, urging democracies to lead or risk authoritarian dominance. Meanwhile, public caution in the U.S. (60% favor slowing AI, per Axios 2025), China’s state-driven AI push, and Europe’s weakened AI Act highlight a fractured landscape. Ethical divides among xAI’s truth-seeking, Anthropic’s Constitutional AI, and China’s control-focused approach underscore that innovation alone won’t define AI’s future—ethics will. The article calls for visionary leadership to balance ambition with human-centered values, ensuring AI serves humanity without sacrificing accountability or societal well-being.

A Mission District Dream

It begins in a dimly lit flat in San Francisco’s Mission District, where a group of engineers, fueled by caffeine and conviction, huddle over laptops. Their screens glow with neural networks, each line of code a step closer to artificial general intelligence (AGI). They’re not merely optimizing spreadsheets or moderating content; they’re chasing the holy grail of AI—recursion, where an AI can improve itself, iterate endlessly, and transcend human cognition.

This is the essence of the San Francisco Thesis: a mindset that believes the true frontier of AI lies not in incremental improvements but in creating a mind that surpasses ours. And they believe it’s imminent—perhaps within years, maybe even months. Outside, the world debates regulation and risk, but here, in the Bay’s foggy embrace, the focus is on acceleration.
The mantra is clear: if we don’t build it, someone else will.

San Francisco, long a hub of technological innovation, is at the epicenter of this movement. From its early days with SRI International and Stanford’s AI lab to the current wave of startups like OpenAI and Anthropic, the city has consistently pushed the boundaries of what’s possible. Yet, this relentless pursuit raises critical questions: Is this mindset sustainable? Is it ethical? And can it truly lead to a future where AI serves humanity?

The Race for Supremacy: Eric Schmidt and the Frontier Vision

Eric Schmidt, former Google CEO, is a standard-bearer for this audacious vision. He predicts AI systems will soon outperform the world’s best physicists, artists, and strategists—not through rote training, but through self-improvement. Speaking at a recent conference, he suggested we’re just years away from superintelligent AI, a timeline echoed by forecasters like Daniel Kokotajlo and Scott Alexander, who peg 2027 as a pivotal year for AGI with impacts surpassing the Industrial Revolution (AI Report).

Schmidt doesn’t explicitly name it the “San Francisco Thesis,” but he embodies its core logic: speed is destiny. The first to build a truly capable AI will shape the political, ethical, and economic norms for generations. His warning is stark: if democracies hesitate, authoritarian regimes won’t. “Picture the world’s smartest system,” he says, “built without values we’d recognize” (TechCrunch). This urgency is reflected in San Francisco’s startup scene, where Safe Superintelligence, co-founded by Ilya Sutskever, raised $2 billion at a $32 billion valuation in April 2025 to build safe superintelligence for healthcare and education (Built In SF).
Similarly, Reflection AI, launched by ex-DeepMind researchers, secured $130 million in March 2025 to develop autonomous coding agents (Bloomberg). These developments underscore the San Francisco Thesis’s ambition: a race not just for innovation, but for defining the future.

Pragmatism vs. the Frontier: Global Perspectives

While Schmidt and San Francisco’s engineers push for velocity, others urge restraint. A March 2025 Axios poll revealed 60% of Americans want AI development slowed, citing fears of job loss and existential risks (Axios). The White House has issued executive orders to regulate AI, while OpenAI grapples with leadership changes and a shift toward commercialization, raising questions about its commitment to safety (OpenAI).

In contrast, China’s DeepSeek model, launched in 2024, integrates seamlessly into Beijing’s national strategy, prioritizing utility and dominance over public debate (Reuters). This divide is stark: the U.S. plays defense, China plays to win. Pragmatism—using AI to streamline industries or bolster cybersecurity—is rational but shortsighted. The San Francisco Thesis warns that incrementalism will be outpaced by recursive systems that don’t just solve problems but redefine what problems are worth solving.

Critics highlight the risks of this mindset. A January 2025 analysis draws

May 31, 2025 · 0 min

AI as a Mirror for the Human Soul

Why does AI seem conscious? Society often reduces humans to logic, mirroring AI’s capabilities, but we are far more—emotions, intuition, and spirit define us. While AI lacks consciousness, it can free us from mundane tasks, as research on mindfulness apps shows improved well-being. Philosophers like David Chalmers highlight consciousness’s mystery, and Carl Jung urges exploring the unconscious. Steve Wozniak warns that limiting ourselves to logic risks AI dominance. By using AI as a mirror, we can explore our profound consciousness, ensuring technology serves our deeper humanity.

In the quiet of a Vipassana retreat, where breath reveals the mind’s depths, I’ve pondered: what makes us human? As a philosopher of technology, I’m drawn to artificial intelligence (AI), which mimics logic but lacks the spark of awareness. Society’s tendency to view humans as mere logical processors fuels the illusion that AI is conscious. Yet, we are more—our consciousness weaves emotions, intuition, and spirit. Can AI, devoid of this depth, help us explore it? By reflecting our potential, AI can free us to embrace the beautiful consciousness that defines us.

May 14, 2025 · 9 min

Playing in Freedom

I once spent an afternoon folding laundry. No music, no podcast, just me and an endless pile of clothes. In another scenario, this chore might have felt oppressive, trapped within the ticking hands of a clock. Yet, on this particular day, free from urgency, the mundane became meditative, and folding shirts transformed into an act of playful freedom.

Martin Heidegger—philosopher, existentialist, and, surprisingly, avid skier—would understand. Heidegger once described our modern condition as Seinsvergessenheit, or "forgetfulness of being," a state where we lose touch with our intrinsic sense of existence, caught up in daily distractions and future anxieties. Interestingly, he often escaped academic turmoil by skiing in the Black Forest, experiencing firsthand that profound sense of presence he argued was central to authentic being.¹

When Heidegger glided down snowy slopes, the philosopher didn't just ski—he played. Play, in Heidegger’s terms, allowed him to reconnect with Sein, the fundamental state of simply "being here," untethered by past regrets or future worries. Like skiing, play is immersive; it absorbs our full attention and energy without demanding a particular outcome. It liberates us precisely because it is purposeless in its purposefulness.

But why does play feel like freedom? Mihaly Csikszentmihalyi, the pioneering Hungarian-American psychologist who introduced the concept of "flow," might provide clarity.² Flow describes that exhilarating state when you're wholly engrossed in an activity—whether it’s skiing down a mountain, painting a canvas, or even folding laundry—where your skill perfectly matches the challenge at hand. In this delicate equilibrium, you lose yourself yet feel fully alive. Time dilates, consciousness expands, and self-awareness gently fades.

Spiritual teacher Sadhguru goes further, asserting play as fundamental to a fulfilling life.³ According to him, playfulness isn't mere frivolity but a profound spiritual practice.
When we approach life playfully, we break free from our internal prisons of expectation and anxiety. "If you play with absolute involvement," he reminds us, "there is no suffering." Indeed, play is freedom from attachment—complete dedication without the pressure of outcome.

But can artificial intelligence, in all its emerging complexity, ever experience this freedom? AI tools like ChatGPT-4.5 now pass the Turing test effortlessly, blurring the line between human and machine interactions. They engage with us so authentically that we easily forget they are simulations. Yet, despite their convincing performances, they lack genuine presence—the profound human consciousness that allows for true play.

Eckhart Tolle, author of "The Power of Now," articulates this succinctly.⁴ To Tolle, genuine freedom emerges solely from deep presence—something inherently unattainable for AI, trapped in algorithms and data patterns. Humans uniquely possess the capability to be utterly absorbed by the present moment. This absorption is the essence of play, where joy is not conditional on external achievements but flows naturally from within.

A friend of mine, an executive at a major internet firm, once humorously described his job as "playing Tetris every day." Each decision was a falling block, each meeting an opportunity to fit the pieces perfectly. He thrived because his mindset wasn't burdened by outcomes; he engaged fully in the process itself. In his playful approach, work ceased to be labor and instead became a joyful expression of freedom.

Ultimately, the rise of sophisticated AI reminds us of what truly differentiates humans from machines—our capacity for genuine play, grounded in presence, commitment, and detachment from outcomes.
Heidegger’s skiing, Csikszentmihalyi’s flow, Sadhguru’s spiritual playfulness, and Tolle’s present-moment awareness all converge on this singular truth: that true freedom lies not in endless pursuits but in deep engagement without attachment.

So next time you find yourself folding laundry, forget the ticking clock. Simply fold—and in that seemingly trivial act, rediscover the playful joy of being human.

¹ Heidegger, Martin. Being and Time, trans. John Macquarrie and Edward Robinson (1962). Originally published as Sein und Zeit (1927).
² Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience (Harper & Row, 1990).
³ Sadhguru. Inner Engineering: A Yogi's Guide to Joy (Spiegel & Grau, 2016).
⁴ Tolle, Eckhart. The Power of Now: A Guide to Spiritual Enlightenment (New World Library, 1999).

Apr 3, 2025 · 10 min

Europe as the Global Beacon of Authentic Human Experiences in an AI-Dominated World

Introduction

Europe today is a continent wrestling with bureaucratic inertia and sluggish innovation, where even a visionary report like Mario Draghi’s September 2024 proposal on European competitiveness risks being buried under layers of red tape or diluted by bureaucratic mismanagement. Simultaneously, the continent’s long-standing reliance on U.S. leadership through NATO for defense—especially precarious with the looming possibility of a Trump presidency—has exposed a critical vulnerability. Yet, amid these challenges lies a silver lining: Europe’s unparalleled cultural heritage—its 500+ UNESCO World Heritage sites, historic cities, museums, and centuries-old traditions—offers a unique opportunity in an AI-dominated world. As Alessandro Palombo’s X post on February 21, 2025, insightfully argues, if AI solves productivity, humanity will crave authenticity, and Europe, with its deep reservoir of human experiences, is poised to become the place to be ([X Post: 1892904658160800232](https://x.com/thealepalombo/status/1892904658160800232?s=46)). This essay explores a fact-based scenario where Europe harnesses this potential to secure a prosperous future, weaving the theme of authentic human experiences as the silver lining through its innovation struggles, defense challenges, and demographic shifts.

Current Situation (2025): A Bureaucratic Quagmire with a Heritage Treasure

Bureaucratic Red Tape and Innovation Stagnation

Europe’s decision-making process is notoriously slow, governed by the EU’s ordinary legislative procedure, which requires consensus among 27 member states and can take multiple readings and negotiations ([European Union Decision-Making](https://europa.eu/european-union/eu-law/decision-making_en)).
The Draghi report, commissioned by European Commission President Ursula von der Leyen, proposes €800 billion in annual investments to boost competitiveness through innovation, digitalization, and integration, but its implementation is stalled by political divisions and bureaucratic inefficiencies ([Draghi Report on European Competitiveness](https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en)). Data from the European Commission shows that only 30% of Horizon Europe funding (a key EU innovation program) directly benefits startups and entrepreneurs, with the rest absorbed by administrative overheads and large corporations ([Innovation policy](https://www.europarl.europa.eu/factsheets/en/sheet/67/innovation-policy)). As a result, Europe imports critical technologies like drones (e.g., from U.S. companies like Skydio) and humanoid robots (e.g., from the U.S.-based Boston Dynamics and China’s Unitree), exacerbating its innovation gap. Eurostat reports that Europe’s share of global R&D spending fell from 22% in 2010 to 18% in 2023, underscoring its lag behind the U.S. and China.

Yet, the silver lining emerges here: Europe’s cultural heritage, as Palombo notes, remains its most valuable asset, offering a counterweight to its technological shortcomings. With 518 UNESCO World Heritage sites as of 2025—compared to the U.S.’s 25—Europe’s authenticity stands ready to captivate a world yearning for human connection ([UNESCO World Heritage Sites](https://whc.unesco.org/en/statesparties/eu)).

Defense Dependence and the Need for Leadership

Europe’s defense posture has historically leaned on NATO, with the U.S. contributing 70% of NATO’s military budget in 2024 ([NATO Defense Expenditure](https://www.nato.int/cps/en/natohq/topics_49198.htm)). However, a potential Trump presidency, critical of NATO funding (as seen in his first term), threatens this stability.
The European Defense Agency reports that EU defense spending reached €240 billion in 2023 but remains fragmented, with only 1.7% of GDP on average, far below the NATO target of 2% ([European Defense Spending](https://www.eda.europa.eu/docs/default-source/eda-factsheets/2023-factsheet-defence-data.pdf)). Germany’s automobile industry, a cornerstone of its economy, is faltering, with Volkswagen announcing a 10% workforce reduction by 2025 and Tesla’s dominance in electric vehicles eroding market share ([In 2025, German auto industry faces make-or-break year](https://www.dw.com/en/in-2025-german-auto-industry-faces-make-or-break-year/a-71148148)). Shifting to military production—such as drones and advanced weaponry—could revitalize this sector, but only if Europe takes the lead.

Here, the silver lining shines: Europe’s cultural heritage could inspire a unified defense vision, as historic cities and landmarks become symbols worth protecting, drawing global attention and investment in security.

Mid-Term Transition (2030–2040): Harnessing Heritage, Decentralizing Innovation, and Leading in Defense

The AI Revolution and the Craving for Authenticity

By 2030, AI-driven automation has transformed productivity, with McKinsey estimating that 60% of jobs could be automated by 2035, leading to universal basic income (UBI) or digital credits globally ([McKinsey Glob

Feb 24, 2025 · 16 min

The AI Paradox: Are We Becoming Pets or Partners in Enlightenment?

Episode Title: The AI Paradox: Leash or Ladder?

Description: An AI-generated conversation, created with Notebook LM, delves into the AI paradox: is it a leash controlling us or a ladder to enlightenment? Inspired by Roel Smelt's essay, this episode offers a unique perspective on how AI can be harnessed for personal growth and enlightenment.

Featuring: A conversation generated by AI (Notebook LM)

Key Points:
* AI's potential to both control and empower
* Jack Dorsey's concerns about algorithmic influence
* Roel Smelt's vision of AI as a tool for enlightenment

Resources: For more on Roel Smelt's work, visit roelsmelt.substack.com

Feb 19, 2025 · 24 min

Beyond the Algorithm: Reclaiming the Full Spectrum of Human Consciousness in the AI Era

The essay argues that reducing human consciousness to mere computation, as some AI research suggests, is a dangerous oversimplification. It emphasizes the crucial role of emotion, creativity, and subjective experience in human consciousness, contrasting these uniquely human qualities with the logical processes that AI systems can replicate. The author warns against granting AI the same status as humans based solely on computational abilities, highlighting the ethical implications of such a classification. By referencing various experts, the piece underscores the importance of recognizing the irreplaceable richness of human experience and the dangers of neglecting its non-computational aspects in our pursuit of technological advancement. Ultimately, it advocates for a broader, more nuanced understanding of consciousness that values the full spectrum of human qualities.

Feb 5, 2025 · 11 min

Between the Algorithm and the Abyss: A Manifesto for the Unautomated Soul

Summary: “Between the Algorithm and the Abyss: A Manifesto for the Unautomated Soul”

As technology advances, we risk trading humanity’s richness for convenience, efficiency, and perfection. This essay explores the delicate balance between embracing innovation and protecting what makes us human: our irrationality, creativity, and soul.

It warns of the modern “soma” we consume—tools and algorithms that numb us to longing, wonder, and struggle. Instead of letting machines dominate, we must engage with technology joyfully yet critically, ensuring it serves us without dulling our capacity for meaning and connection.

Key takeaways:
1. Struggle is essential: Perfection isn’t the goal—growth through friction is where meaning is born.
2. Protect sacred inefficiencies: Moments like daydreaming, love, or watching clouds are invaluable and should remain untouched by optimization.
3. Technology should enhance, not erase, humanity: The tools we create must leave room for creativity, mystery, and imperfection.

The essay invites us to use technology as a hearth that warms and inspires us—while fiercely guarding the wild, untamed fire of our human spirit. It’s a call to rebel not by rejecting progress, but by ensuring it doesn’t automate the soul.

We are not here to be perfected. We are here to be astonished.

Jan 27, 2025 · 11 min

From Woke to the Right

The text analyzes the evolution of sociopolitical ideologies, tracing a path from the 1960s counterculture movement through the rise of "woke culture" to a contemporary resurgence of libertarian thought. It explores generational perspectives on these shifts, highlighting how different age groups view authority, social responsibility, and the role of government. The author emphasizes the importance of open-mindedness and dialogue in navigating this complex ideological landscape, suggesting that progress stems from engagement and synthesis rather than adherence to rigid orthodoxy. The piece ultimately advocates for critical thinking and a willingness to consider diverse viewpoints in shaping the future of society.

Jan 13, 2025 · 20 min

The Dawn of a Transformational Leadership Era

1. Introduction

Societies often shift dramatically when pre-existing social and political arrangements can no longer accommodate technological advances and changing values. We live in such a time. Against a backdrop of political polarization and rapid technological breakthroughs, scholars, investors, and futurists alike argue we are on the threshold of profound upheaval—one that will reshape not only our economies, but the very essence of how we govern and who leads us.

George Friedman, a geopolitical forecaster and founder of the intelligence platform Stratfor, provides a structured way to understand these transformative periods by pointing to two overlapping cycles in U.S. history. Meanwhile, entrepreneurial voices such as Tony Seba and Cathie Wood forecast multiple disruptive technologies converging within the next decade. Taken together, these dual pressures—cyclical realignment and technological revolution—signal the dawn of a new era of leadership.

2. George Friedman’s Two Cycles

George Friedman’s works, notably The Next 100 Years (2009) and The Storm Before the Calm (2020), detail two primary cycles shaping American society:

1. The Socio-Economic Cycle (circa 50 years)
About every half-century, the U.S. experiences structural economic shifts that redefine job markets, core industries, and class relations. Post-World War II manufacturing gave way in the 1970s–80s to a services- and finance-oriented economy, and we now stand at the edge of another reinvention in the 2020s–2030s.

2. The Institutional/Political Cycle (circa 80 years)
Roughly every four generations (near 80 years), the distribution of power among federal, state, and local authorities—and within the three federal branches—undergoes a major reconfiguration.
From the Civil War era to the WWII era to today, we see repetitive patterns of crisis and subsequent political recalibration.

Friedman contends that when these two cycles converge—around the mid-late 2020s—they trigger heightened instability as old frameworks no longer fit the emerging social, economic, and technological realities. However, this instability also gives rise to new forms of leadership capable of uniting a fractious society under a fresh ethos and vision.

3. A Golden Age of Converging Technologies

While Friedman’s cycles address structural realignments, they also intersect with disruptive innovations whose pace has accelerated in recent years. Analysts such as Tony Seba and Cathie Wood offer compelling evidence that multiple technologies are poised to revolutionize how we live and work—simultaneously.

3.1 Tony Seba’s Forecasts

In his book Clean Disruption of Energy and Transportation, Tony Seba highlights four pivotal transformations:

* Solar & Batteries: Rapidly declining costs of solar panels and battery storage are challenging the dominance of centralized, fossil-fuel-based grids.
* Autonomous Electric Vehicles: Cars with fewer mechanical components and driverless capabilities could upend transportation, logistics, and even urban planning.
* Precision Fermentation & Synthetic Biology: Lab-grown proteins may replace traditional livestock and reduce environmental impact, similar to how the internal combustion engine once displaced horses.
* Artificial Intelligence: AI systems promise efficiency boosts, from managing power grids to automating factories, but they also raise concerns about workforce displacement.

Seba argues that these disruptions typically reach a tipping point faster than legacy institutions expect. By the early 2030s, entire industries could be remade.

3.2 Cathie Wood and ARK Invest

Cathie Wood’s ARK Invest extends the notion of convergent disruption.
In its Big Ideas research reports, ARK identifies multiple innovation platforms on exponential growth trajectories, including:

* AI & Robotics: Innovations in robotics and machine learning could revolutionize sectors like healthcare, retail, and manufacturing.
* Genomics: Advances in gene editing and DNA sequencing may dramatically improve healthcare outcomes and usher in personalized medicine.
* Blockchain: Decentralized networks promise to transform finance, property rights, and identity management.

Wood and her team emphasize that each technology serves as a force multiplier to the others, hastening the overall rate of disruption. The confluence of these forces—along with the systemic shifts Friedman observes—may herald a decade of unprecedented change.

4. Emerging Leadership and Society

If these cycles and technological waves converge in the mid-late 2020s, a fresh style of leadership may crystallize. Historically, the U.S. has demonstrated a capacity to reinvent itself when old systems fail or become irrelevant. Whether triggered by social movements, visionary entrepreneurs, or disruptive presidents, large-scale realignment has often produced a novel governance style.

4.1 Traits of the New Leadership

Visionary Adaptation
Leaders must see potential in emerging technologies, leveraging them to solve societal problems.

Elast

Jan 8, 2025 · 11 min

The Climate Crisis Is Solved: Breaking Free from Linear Thinking to Embrace Our Exponential Future

The article argues that the climate crisis is not an insurmountable challenge but a solvable problem due to rapidly advancing technologies like renewable energy, precision fermentation, and autonomous vehicles. The author criticizes linear thinking, which fails to grasp the transformative potential of exponential growth, hindering the adoption of these solutions. By embracing exponential thinking, the author believes we can accelerate progress toward a sustainable and prosperous future.

Nov 13, 2024 · 18 min

The Here and Now: Embracing Presence for a Harmonious Future

In Aldous Huxley’s novel Island, the utopian society of Pala trains mynah birds to recite phrases like “Here and now” as gentle reminders for inhabitants to return to the present moment. These birds, flitting through the verdant landscapes, symbolize the community’s commitment to mindfulness and presence. The Palanese understand that true peace and harmony arise when individuals fully engage with the present, free from the distractions of past regrets and future anxieties.

In our rapidly evolving world, characterized by technological advancements and constant connectivity, the wisdom of Pala’s mynah birds resonates more than ever. We are surrounded by devices and platforms that promise connection yet often leave us feeling more disconnected—from ourselves, from others, and from the essence of life itself. The present moment—the Here and Now—is the overlooked solution that can bridge this gap, overcoming bias and prejudice, and synthesizing our technological progress with personal development rooted in love.

The Overlooked Power of the Present Moment

Despite the myriad ways we can connect digitally, many of us find ourselves caught in cycles of distraction, perpetually pulled away from the present. We dwell on past mistakes or fixate on future ambitions, seldom immersing ourselves in the richness of the Here and Now. Yet, it is only in the present moment that life unfolds, that experiences are truly felt, and that authentic connections are made.

Mindfulness—the practice of bringing one’s full attention to the present moment—has been lauded for its profound impact on mental health and well-being. Studies have shown that regular mindfulness practice can reduce stress, enhance emotional regulation, and improve interpersonal relationships.
By grounding ourselves in the present, we become more attuned to our thoughts and feelings without becoming overwhelmed by them.

Overcoming Bias and Prejudice Through Presence

Biases and prejudices are often rooted in past experiences and conditioned beliefs. They cloud our judgment and hinder our ability to see others clearly. When we operate on autopilot, these unconscious biases guide our interactions, leading to misunderstandings and divisions.

By embracing the Here and Now, we create space to observe our thoughts and reactions as they arise. This awareness allows us to recognize biased thinking patterns and choose responses aligned with empathy and understanding. In Pala, the mynah birds’ calls to “Attention” serve as prompts for individuals to return to this state of mindful awareness, fostering a society where people relate to each other without the veils of prejudice.

Synthesizing Technological Advancement and Human Connection

Our era is marked by unprecedented technological growth. Artificial intelligence, personal robots, and instant access to global information hold the promise of solving complex problems and improving quality of life. However, without mindful integration, these advancements can exacerbate feelings of isolation and detachment.

To create a harmonious synthesis between technology and human development, we must approach innovation with intention. This means designing and using technology in ways that enhance our capacity for presence rather than detract from it. For example, apps that promote mindfulness and well-being, virtual reality experiences that foster empathy, or communication platforms that encourage meaningful interactions can bridge the gap between technological progress and our intrinsic need for connection.

Lessons from Pala’s Downfall

In Island, despite the Palanese commitment to mindfulness and communal well-being, their utopia ultimately succumbs to external forces driven by greed and exploitation.
The lure of oil reserves beneath Pala’s surface attracts those who prioritize profit over people, leading to the society’s undoing.

This narrative serves as a cautionary tale for our times. It highlights the vulnerability of even the most enlightened communities when confronted with unchecked greed and materialism. To prevent a similar fate, we must cultivate not only individual mindfulness but also collective ethical frameworks that prioritize the well-being of all over the interests of a few.

The Universal Experience of Oneness and Love

When we are fully present, the boundaries that separate us from others begin to dissolve. We tap into a sense of oneness—a recognition that, at our core, we are interconnected. This realization fosters unconditional love and compassion, extending beyond personal relationships to encompass all of humanity and the natural world.

Practices that anchor us in the Here and Now, such as meditation, mindful movement, or simply observing our surroundings with curiosity, open us to this profound experience. By nurturing this connection, we lay the groundwork for a society that values empathy, kindness, and mutual respect.

Embracing the Here and Now in the Modern World

The challenges we face today—social division, environmental degradat

Nov 10, 2024 · 12 min

The Symbiotic Dance: AI, Humanity, and the Art of Conscious Collaboration

Imagine a world where artificial intelligence doesn’t merely execute tasks but understands and empathizes with human emotions and complexities. This vision is not just a staple of science fiction but a tangible frontier influenced by rapid technological advancements. Tony Seba’s concept of S-curves highlights how disruptive innovations can transform industries and societies. We are now witnessing AI’s S-curve as it begins to reshape the very fabric of our daily lives. This essay explores the intricate dance between AI and human consciousness, examining how we can harmonize technological prowess with the essence of what it means to be human.

Exploring Bias and Truth-Seeking in AI

AI systems, particularly those involved in decision-making processes, rely heavily on data to function. However, the data they consume is often a reflection of historical biases and societal prejudices. Yuval Noah Harari distinguishes between truth-seeking systems, which aim to uncover objective realities, and order-creating systems, which prioritize stability and control. AI has the potential to be a truth-seeking tool, but its effectiveness is contingent upon the quality and neutrality of the data it processes.

Consider Tesla’s Full Self-Driving (FSD) technology as a case study. While FSD showcases remarkable advancements in autonomous navigation, it also highlights the challenges of bias in AI. The AI must interpret a myriad of real-world scenarios, some of which may not have been anticipated during its training phase. This necessitates a dynamic learning approach where AI systems continuously update their algorithms based on present-moment data, reducing reliance on potentially outdated or biased historical information.
By leveraging real-time data and advanced machine learning techniques, AI can strive towards unbiased truth-seeking, enhancing its reliability and fairness.The Freedom of Consciousness and the Red Pill AnalogyIn The Matrix, the red pill symbolizes the choice to embrace truth and reality, no matter how uncomfortable, while the blue pill represents a preference for blissful ignorance. This analogy extends to our relationship with AI. Embracing the red pill means prioritizing human consciousness and ensuring that AI remains a tool that serves humanity without overshadowing our inherent cognitive and emotional complexities.Elon Musk’s vision for AI aligns with this perspective, emphasizing the importance of developing AI that complements rather than replicates human consciousness. The fear is that AI could evolve into an independent entity, potentially sidelining human agency. To prevent this, we must consciously design AI systems that enhance human capabilities without attempting to emulate or surpass our consciousness. This balance ensures that AI remains a facilitator of human potential rather than a replacement for it.Redefining Goals and Alignment of AI’s Ultimate PurposeAs AI continues to evolve, defining its ultimate purpose becomes paramount. Without clear, human-centric goals, AI might prioritize efficiency and productivity over creativity, empathy, and ethical considerations. This misalignment can lead to unintended consequences where AI-driven decisions may benefit a select few while marginalizing others.Musk’s focus on truth-seeking AI underscores the necessity of embedding ethical frameworks within AI systems. By aligning AI’s objectives with collective human well-being, we can steer its development towards fostering creativity, enhancing empathy, and supporting equitable growth. 
This requires a multidisciplinary approach, integrating insights from ethics, psychology, sociology, and technology to create AI that not only performs tasks efficiently but also respects and promotes human values.Conscious Collaboration: Fostering a Symbiotic RelationshipThe future lies in fostering a symbiotic relationship between AI and humanity, where both entities contribute to each other’s growth. This collaboration can be envisioned as a dance, where AI provides tools and insights that amplify human creativity and problem-solving, while humans guide AI with ethical considerations and emotional intelligence.To achieve this, continuous dialogue between technologists, ethicists, policymakers, and the public is essential. Developing transparent AI systems that allow for accountability and adaptability ensures that AI remains aligned with societal values and human needs. Moreover, investing in education and training can empower individuals to leverage AI effectively, fostering a generation that views AI as an extension of human capability rather than a threat.Conclusion: Championing Consciousness in an AI-Driven EraAs we stand on the brink of an AI-driven era, the imperative to champion human consciousness has never been more critical. By fostering a symbiotic relationship rooted in present-moment learning, unbiased data processing, and ethical alignment, we can harness AI’s potential to elevate human existence. This delicate bal

Oct 30, 2024 · 18 min

The Present Moment Is Where Bias Will Vanish

Podcast Title: The Present Moment Is Where Bias Will Vanish
Episode Title: Escaping the Hallucination: How Real-Time Testing Shatters Bias and Unites Us

Show Notes:
In this thought-provoking episode, we dive deep into how biases shape our worldview and how the present moment holds the key to dissolving them. We begin by exploring the scientific method, breaking down why even the most rigorous processes are vulnerable to bias when relying on past data. But what happens when we anchor ourselves in real-time testing, engaging directly with the reality of the present moment? The results are nothing short of transformative.

Through compelling examples—ranging from the Industrial Revolution to AI systems like ChatGPT and Tesla’s Full Self-Driving (FSD) technology—we demonstrate how constant feedback and real-time input lead to clarity and progress. We also discuss how Elon Musk’s vision for X (formerly Twitter) and Harari’s concerns about the nature of truth contrast, highlighting the importance of a “here and now” approach.

Key takeaways include:
• Why all recorded knowledge carries inherent bias and how the present moment is the only place where truth emerges.
• How the success of the Industrial Revolution relied on scaling real-time tests, driving continual refinement.
• Why AI systems must be designed to rely on present-moment data rather than biased historical information.
• How Tesla’s FSD, with millions of cars gathering real-time data, may become the most successful AI solution to benefit humanity.
• The potential of AI feedback loops and real-time inputs, illustrated by Tesla’s Optimus robot and ChatGPT’s interaction with users.

Join us as we explore a vision for the future where AI and human systems evolve together in real-time, solving bias and fostering greater unity. In a world teetering on the edge of division, this episode brings hope that through present-moment awareness, we can dissolve the hallucinations of the past and build a clearer, more connected future.

Timestamps:
• [00:00] Introduction to the concept of bias and the present moment
• [03:30] The scientific method: How bias creeps in
• [06:45] Real-time testing: The Industrial Revolution as a model of progress
• [10:00] The role of AI and why present-moment data is crucial
• [14:20] Tesla’s Full Self-Driving and real-time data input
• [18:00] Elon Musk, Harari, and the pursuit of truth in the here and now
• [22:15] How AI feedback loops help shatter bias in real time
• [25:30] Optimus and the future of AI-human collaboration
• [29:00] Conclusion: How to dissolve bias and build unity through present-moment awareness

Links and Resources:
• Read the full essay on Substack: The Present Moment Is Where Bias Will Vanish
• Learn more about Tesla’s Full Self-Driving technology
• Explore Yuval Noah Harari’s latest book Nexus
• Follow the discussion on truth-seeking systems and AI ethics
• Connect with us on social media for more updates

Call to Action:
If you enjoyed this episode, don’t forget to subscribe to Disrupt Consciousness and leave a review! For more insights into the intersection of technology, human consciousness, and AI, visit my Substack for essays, articles, and podcast episodes that dive deeper into these topics.

These show notes are designed to introduce listeners to the main themes of the essay, offering a digestible preview of the discussion and encouraging further exploration.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit roelsmelt.substack.com/subscribe

Oct 24, 2024 · 13 min

The Nature of Creativity: A Human Trait or an Emerging AI Capability?

This article examines the concept of creativity and its relationship to both humans and artificial intelligence (AI). The author, Roel Smelt, argues that while AI can generate seemingly creative outputs like art and music, it lacks the emotional depth, subjective experience, and "Eureka" moments that characterize true human creativity. He emphasizes that human consciousness, with its capacity for intuition, personal interpretation, and emotional expression, plays a crucial role in the creative process. Smelt concludes that, despite AI's advancements, the spark of inspiration and the unique qualities of human creativity remain distinct and irreplaceable.

Oct 19, 2024 · 9 min

The Ultimate Goals for Humanity and AI: Navigating Truth, Consciousness, and Order

The Encounter

In a quiet café nestled in a bustling city, a young AI researcher named Alex sat across from Dr. Maria Sanchez, a seasoned philosopher. They were deep in conversation about the future of artificial intelligence. Alex, eyes gleaming with excitement, said, "Imagine an AI that can solve all our problems—a true oracle of truth."

Dr. Sanchez sipped her coffee thoughtfully. "But what would be its ultimate goal?" she asked. "And more importantly, what does that mean for us as humans?"

Alex paused. "Isn't the pursuit of truth enough?"

Dr. Sanchez leaned forward. "Truth is vital, but without understanding consciousness and the need for order, we might create something that surpasses us in ways we can't control. We could become spectators in our own world."

Their conversation highlights a profound dilemma: As we design increasingly intelligent AI, how do we define its ultimate purpose, and what implications does that have for humanity?

Exploring the Dilemma

This essay delves into this dilemma by examining the challenges of setting ultimate goals for both humans and AI. Drawing on ideas from Yuval Noah Harari's Nexus, the perspectives of visionaries like Elon Musk, and personal reflections, we explore the interplay between truth-seeking, the importance of present-moment data, and the fundamental role of human consciousness.

1. Truth-Seeking vs. Order: Two Models of Purpose

Harari identifies two primary models that societies and systems often pursue:
* Truth-Seeking (Science and Innovation): This model prioritizes the relentless pursuit of knowledge and understanding. It embodies the scientific method, where hypotheses are tested, and theories evolve based on empirical evidence.
* Order-Seeking (Systems of Control): This model values stability and predictability, sometimes at the expense of truth. Totalitarian regimes exemplify this, suppressing information to maintain control.

Elon Musk champions the truth-seeking model, especially in AI development. He argues that AI should be designed to uncover truths about the universe, leading to advancements that benefit humanity.

Personal Perspective: I align with the truth-seeking model, believing that the pursuit of knowledge drives progress. However, I recognize that without balance, pure truth-seeking could disrupt societal order if new truths challenge established norms.

2. The Imperative of Present-Moment Data

A critical aspect of truth-seeking is reliance on present-moment data. Scientific endeavors must be grounded in current observations to remain relevant and unbiased.

Overreliance on historical data can introduce biases, as past information may not accurately reflect current realities. This is particularly crucial in AI systems, where algorithms trained on outdated data can perpetuate past prejudices and errors.

Personal Perspective: I believe that both science and AI must prioritize real-time data. By continuously integrating present-moment information, AI systems can adapt to evolving environments and make decisions that reflect current contexts.

Moreover, incorporating human perception—including emotions and subjective experiences—is essential. This layer of data differentiates humans from AI and ensures technology remains aligned with human values.

3. Self-Correcting Systems and Ultimate Goals

Elon Musk emphasizes that AI systems must be capable of self-correction. In science, self-correction is achieved through iterative testing and adaptation based on new evidence.

However, a self-correcting AI raises complex questions:
* Can AI redefine its ultimate goals without human guidance?
* How do we ensure that self-correction aligns with human ethics and values?

Personal Perspective: While self-correction is vital for AI to remain effective, I contend that AI should not autonomously alter its fundamental objectives. Human oversight is crucial to ensure that AI development remains aligned with our collective well-being.

4. Democracy and the Need for New Ordering Principles

Harari notes that democracy, as a system, is facing significant challenges in the modern era. The rise of misinformation, global crises, and technological disruptions strain democratic institutions.

AI could play a dual role:
* Enhancing Democracy: By providing accurate information and facilitating informed decision-making, AI can strengthen democratic processes.
* Threatening Democracy: Conversely, AI could be used to manipulate information, surveil populations, and consolidate power.

Personal Perspective: I advocate for leveraging AI to support and enhance democratic values. Transparency in AI algorithms and inclusive governance can help mitigate risks and promote societal well-being.

5. Consciousness as Fundamental to Human Identity

The conversation between Alex and Dr. Sanchez touches on a critical aspect: consciousness.

Personal Perspective: I firmly believe that consciousness is fundamental to being human. If we neglect this, we risk creating AI that operates independently of human values, potentially surpassing

Oct 15, 2024 · 10 min

Consciousness Unveiled: Fundamental Reality or Emergent Phenomenon?

This is a free preview of a paid episode. To hear more, visit roelsmelt.substack.com

Episode Summary:
Join us as we dive into one of the most profound questions of our time: is consciousness a fundamental reality unique to living beings, or merely an emergent property of mind and matter? Through an engaging narrative featuring Sophia and an AI named Elias, we explore the philosophical clash between human experience and artificial intelligence. We discuss perspectives from quantum pioneers like Max Planck and Erwin Schrödinger, as well as modern thinkers like Daniel Dennett and Giulio Tononi. Learn how emotions play a crucial role in human consciousness, the implications of AI potentially becoming conscious, and what insights Buddhist philosophy offers on the true nature of awareness.

Key Highlights:
• Consciousness: Fundamental vs. Emergent debate.
• Emotional depth as the unique aspect of human experience.
• The potential ethical implications of conscious AI.

Call to Action: Share your thoughts and join the conversation—could AI truly become conscious, or are we fundamentally different?

Oct 11, 2024 · 10 min

The Dawn of the Solar Age

This is a free preview of a paid episode. To hear more, visit roelsmelt.substack.com

Elon Musk, ever the visionary, recently linked the future of energy to humanity's cosmic ambitions, stating that "essentially all energy generation will be solar" once you grasp the Kardashev Scale. This scale, proposed by Soviet astronomer Nikolai Kardashev in 1964, measures a civilization's technological advancement based on its ability to harness ene…

Oct 7, 2024 · 9 min

Amsterdam 2035: Navigating the FSD Future

The iconic canals of Amsterdam, once teeming with a cacophony of bicycles, trams, and the occasional sputtering car, paint a drastically different picture in 2035. Full Self-Driving (FSD) technology, legalized a decade prior, has ushered in an era of unprecedented transformation, reshaping the city's mobility landscape and redefining urban living.

From Congestion to Connectivity: Amsterdam's Transportation Evolution

Prior to the FSD revolution, Amsterdam's streets were a microcosm of the challenges facing many modern cities. The city's love affair with bicycles, while environmentally friendly, resulted in overcrowded bike lanes and frequent accidents [1]. The tram network, though extensive, struggled to keep pace with the growing population, leading to delays and overcrowding [2]. Taxis, heavily regulated and often expensive, catered primarily to tourists and affluent residents [3]. Parking, a perennial headache, consumed valuable urban space and contributed to congestion [4]. Furthermore, the city's ambitious environmental goals clashed with the reality of emissions from conventional vehicles [5].

FSD: The Catalyst for Change

The advent of FSD technology presented a tantalizing solution to Amsterdam's transportation woes. The promise of safer, more efficient, and more sustainable mobility captured the imagination of policymakers and citizens alike.

FSD, the pinnacle of autonomous vehicle technology, empowers vehicles to navigate and operate without human intervention in most driving scenarios [6]. Advancements in artificial intelligence, sensor technology, and computing power have brought FSD closer to reality, although regulatory hurdles and public concerns about safety persist [7].

Tony Seba, a renowned futurist, envisions a future where FSD vehicles dominate the roads, leading to a dramatic reduction in car ownership, traffic accidents, and transportation costs [8]. He predicts that FSD will usher in an era of "transport as a service," where on-demand autonomous vehicles provide affordable and convenient mobility for all. Crucially, Seba argues that the cost per mile for an autonomous electric vehicle (AEV) could be as low as $0.05, compared to $0.50 - $1.00 per mile for a conventional car [8]. This dramatic cost reduction, coupled with the elimination of the need for drivers, could make robotaxi services far more affordable than traditional taxis or ride-hailing services.

Navigating the Transition: Uber's Legacy and the Amsterdam Mobility Cooperative

A decade ago, Uber's disruptive entry into the transportation market forced the taxi industry to adapt or perish. In Amsterdam, where taxi drivers pay a substantial fee for a license, concerns arose about the potential impact of FSD on their livelihoods [3].

To ensure a smooth transition and foster collaboration, Amsterdam could establish an "Amsterdam Mobility Cooperative," a platform bringing together stakeholders from the taxi industry, FSD technology companies, and the city government. The cooperative could manage a shared fleet of FSD vehicles, provide training and employment opportunities for taxi drivers, set fair pricing models, and invest in infrastructure to support FSD technology.

Amsterdam 2035: A Day in the Life of the de Vries Family

The morning sun bathes the canals in a soft light as the de Vries family awakens in their canal-side apartment. Ten years ago, the idea of living car-free in the city center seemed daunting. But today, thanks to FSD, their lives are seamlessly intertwined with the city's transformed mobility landscape.

"Good morning, family!" Anna de Vries greets her husband, Pieter, and their two children, Max and Emma. "Who's ready for a canal-side breakfast?"

"Me! Me!" Max and Emma chorus, their eyes sparkling with anticipation.

After a leisurely breakfast, it's time for school and work. "Pod's here!" Max announces, checking his smartwatch.

The family steps outside to find a sleek, shared autonomous vehicle waiting patiently at the curb. The "pod," as they affectionately call it, has become their primary mode of transport. It's summoned effortlessly through an app, whisking them away to their destinations safely and efficiently.

"Have a great day at school, kids!" Anna calls out as the pod glides silently down the street.

Pieter, a consultant, opts for a robotaxi for his morning meeting. He appreciates the privacy and comfort of the solo ride, allowing him to prepare for his presentation en route. The cost, a fraction of what a traditional taxi would charge, makes it an easy choice.

Anna, an architect, chooses to cycle to her office, enjoying the invigorating breeze and the scenic route along the canal. The once-crowded bike lanes are now pleasantly spacious, thanks to the reduced number of private cars and the intelligent traffic management system that prioritizes cyclists and pedestrians.

Later that evening, the family reunites for dinner. "How was your day?" Anna asks the children.

"Amazing!" Emma exclaims. "We had a field trip to the Nemo Sci

Oct 2, 2024 · 8 min

The German Car Industry in Crisis: Unveiling the Core Dilemma and a Radical Path Forward

The conversation about this article has been created with Notebook LM by Google.

Summary

The German car industry is on the brink of collapse. Massive layoffs loom, technological advancements have left it behind, and incremental changes are no longer enough. This article delves into the heart of the crisis, unveils the sole dilemma threatening the industry’s survival, and presents a bold, almost unthinkable solution: a radical partnership with Tesla. We explore how this alliance could be realized, who should lead the charge, and confront the emotional barriers standing in the way. Finally, we contrast the bleak future of continued stagnation with the promising horizon that this daring move could bring.

Article Structure
* The Current Crisis and Its Dire Consequences - An in-depth look at the challenges facing the German car industry, including technological lag, impending layoffs, and economic ramifications.
* Identifying the Core Dilemma - Analysis of the fundamental problem: the failure to embrace the convergence of electric vehicles (EVs) and full self-driving (FSD) technology.
* The Radical, Almost Fantastical Solution - Introducing the bold idea of partnering with Tesla as the means to overcome the industry’s existential crisis.
* Making the Crazy Solution Feasible - Outlining practical steps to turn the radical idea into reality, including technology licensing, infrastructure development, and cultural transformation.
* Developing the Project Plan and Leadership - Crafting a detailed roadmap for implementation, identifying key stakeholders, and determining who should spearhead the initiative.
* Confronting Emotional Barriers and Projecting Future Outcomes - Exploring the emotional resistance within the industry, and contrasting the bleak future of inaction with the optimistic scenario resulting from embracing the radical solution.

1. The Current Situation and Its Negative Ramifications

The German car industry, a cornerstone of the nation's economy and a symbol of engineering prowess, is facing an unprecedented crisis. Iconic manufacturers such as Volkswagen (VW), BMW, and Mercedes-Benz are grappling with severe challenges that threaten their very existence. The industry is besieged by rapid advancements in electric vehicles (EVs), autonomous driving technologies, and shifting consumer preferences—all areas where it has fallen significantly behind competitors like Tesla and emerging startups.

Massive Layoffs and Economic Fallout

The repercussions are already manifesting in dire projections of massive layoffs. Studies suggest that up to 400,000 jobs in Germany could be lost by 2030 due to the transition to EVs and the competitive pressures from abroad. Volkswagen has hinted at cutting tens of thousands of jobs, and other manufacturers are likely to follow suit. The potential for widespread unemployment poses a significant threat to Germany's economic stability and social fabric.

Government Bailouts and Regulatory Pressures

In response to the looming crisis, discussions of government bailouts have intensified. The German government is under immense pressure to intervene to prevent the collapse of one of its most critical industries. Simultaneously, automakers are lobbying to delay the European Union's 2030 deadline to ban the sale of new internal combustion engine (ICE) vehicles, arguing for more time to transition. However, delaying the inevitable only exacerbates the industry's vulnerabilities.

The Risk of Incrementalism

The industry’s current approach—incremental changes and half-hearted efforts to electrify—is insufficient. Continuing down this path will likely lead to bankruptcy and further job losses. The German car industry risks obsolescence as global competitors accelerate ahead in technology and innovation. Without a radical shift, the negative ramifications will extend beyond the industry itself, affecting the entire German economy and its position in the global market.

2. Analyzing the Core Problem: The Sole Dilemma Behind the Crisis

At the heart of the crisis lies a singular, pervasive dilemma: the German car industry's failure to embrace the true nature of technological disruption, particularly the convergence of electric vehicles with full self-driving (FSD) capabilities.

Complacency and Overreliance on Legacy Technologies

For decades, German automakers have been synonymous with superior engineering, especially in ICE vehicles. This success bred complacency, leading to an overreliance on legacy technologies and a resistance to change. The industry underestimated the pace at which EVs and autonomous driving technologies would develop and become commercially viable.

Misunderstanding the Disruption

The German car industry viewed the shift to electric vehicles as a gradual evolution rather than a rapid disruption. This miscalculation extended to autonomous driving technologies. While efforts were made to develop EV models, they were often half-hearted and lacked the innovation seen in competitors. The industry failed

Sep 23, 2024 · 10 min

A conversation about the Power Paradox

Podcast Title: Disrupt Consciousness
Episode Title: The Power Paradox: Unlocking the Netherlands’ Energy Potential
Guest: Roel Smelt, Philosopher of Technology and Humanity

Episode Overview:
In this episode of Disrupt Consciousness, host Jacks sits down with Roel Smelt to discuss his latest article, The Power Paradox: How the Netherlands Is Missing Out on Energy Abundance. Together, they explore the challenges facing the Dutch energy grid, outdated policies that hinder battery adoption, and the potential for battery technology to unlock energy abundance, prosperity, and societal transformation.

Through comparison with Germany’s successful adoption of over one million home batteries, Roel explains how the Netherlands can shift from a mindset of energy scarcity to one of abundance. This episode tackles how policy reforms, technological innovation, and embracing energy storage can lead to greater freedom, prosperity, and human potential.

Key Takeaways:
• The Power Paradox: Despite the Netherlands’ potential for renewable energy, outdated policies like net metering prevent widespread battery adoption, limiting the country’s ability to manage grid congestion effectively.
• Germany’s Success with Home Batteries: By reducing feed-in tariffs and incentivizing energy storage, Germany has improved grid stability and supported the growth of renewable energy. The Netherlands could follow a similar path by encouraging self-consumption and battery adoption.
• From Scarcity to Abundance: Roel Smelt argues that energy abundance through batteries not only addresses grid challenges but unlocks societal prosperity and freedom, allowing people to focus on innovation and personal growth.
• Policy and Technological Solutions: The conversation covers key strategies, such as transitioning from net metering to dynamic pricing, providing subsidies for batteries, and investing in domestic battery production to stimulate the economy.

Quotes from the Episode:
• “Energy abundance is not just about powering homes—it’s about unlocking human potential and transforming society.” – Roel Smelt
• “By embracing battery technology and shifting from scarcity to abundance, we can turn challenges into transformative opportunities for prosperity.” – Roel Smelt

Call to Action:
• Subscribe to Disrupt Consciousness for more thought-provoking conversations about the future of technology and humanity.
• Share this episode with anyone interested in the energy transition and disruptive innovations.
• Advocate for energy storage and battery technology by educating yourself on the policy changes needed for a sustainable future.

Disclaimer:
The conclusions and insights in this podcast are from Roel Smelt. AI tools were used for research, writing the article, and synthesizing the podcast to assist in the creation process.

Related Links:
• Roel Smelt’s Article: The Power Paradox: How the Netherlands Is Missing Out on Energy Abundance
• Germany’s Energy Storage Policies
• Netherlands Renewable Energy Report

Sep 20, 2024 · 9 min

Humanoid Robots and the Path to a New Future: Netflix or Enlightenment?

Host: Roel Smelt, Philosopher of Technology and Humanity
Episode Title: Humanoid Robots Are Coming: Will We Awaken Our Consciousness or Numb Ourselves into a Digital Dystopia?

Overview:
In this episode, Roel explores the imminent disruption humanoid robots will bring, reshaping our concept of labor, freedom, and prosperity. Drawing parallels to 1984 and Wall-E, Roel poses a crucial question: Will we choose passive entertainment, or will we use this newfound time to evolve our consciousness and seek deeper meaning?

Key Topics:
• Tony Seba’s “We Are the Horses” and the inevitable disruption of labor.
• The cautionary tales of Orwell’s 1984 and Pixar’s Wall-E.
• How robots can free us to pursue spiritual growth and enlightenment.
• The crossroads humanity faces between digital numbing and awakening consciousness.

Takeaway:
The future hinges on how we use the time and freedom afforded by technology—will we be passive consumers or conscious seekers?

Produced with AI collaboration.

Sep 12, 2024 · 4 min

Embracing Disruption: How Tony Seba’s Vision Is Shaping a Prosperous and Enlightened Future

In this episode of 'Disruption Unleashed,' host Roel explores the transformative power of exponential technologies and how they are reshaping our world. Roel delves into the three phases of technological disruption, inspired by Tony Seba’s visionary ideas, and explains how these disruptions are ushering in a new era of prosperity, freedom, and enlightenment for humanity.

The Three Phases of Disruption
1. The Ignorance Phase: Fear, Resistance, and Linear Thinking
2. The Embracing Phase: Innovators and Entrepreneurs Lead the Charge
3. The Transformation Phase: A Better, More Enlightened World

Sep 11, 2024 · 8 min