AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatGPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews

145 episodes — Page 2 of 3

AI & The Pathway to Flow with Neuroscientist, Fmr. Dancer DR. JULIA CHRISTENSEN

“So, syncopation is now the big thing. It will induce people to groove and to like your music more. So let's have a lot of syncopation inside your music and you'll sell a lot. By chasing superficial beauty, which is what AI gives us at the moment, it aims for perfect outcomes. Not that anything these models produce is perfect, because how do you evaluate perfection? But they are based on the data that most people want to see again. That's extremely important to bear in mind. When you say 'cluttered mind,' it's actually also a cluttered brain in terms of the neurotransmitters out and about. As we strive for that perfect coding and external beauty, our brain releases dopamine signals. Dopamine is good; it's a learning signal to the brain, but we need to know how to use it. Constantly swiping our phone and getting this beauty into our brain via our eyes, or via the syncopations in the music, teaches our mind to seek that all the time, because that's a dopamine signal. It's a learning signal. So, striving after these shapes and sound cues repeatedly clutters your brain. That's why your mind is full.”

Dr. Julia F. Christensen is a Danish neuroscientist and former dancer currently working as a senior scientist at the Max Planck Institute for Empirical Aesthetics in Germany. She studied psychology, human evolution, and neuroscience in France, Spain, and the UK. For her postdoctoral training, she worked in international, interdisciplinary research labs at University College London, City, University of London, and the Warburg Institute, London, and was awarded a postdoctoral Newton International Fellowship by the British Academy.
Her new book The Pathway to Flow is about the science of flow, why our brain needs it, and how to create the right habits in our brain to get it.
https://www.linkedin.com/in/dr-julia-f-christensen-36539a144
https://www.instagram.com/dr.julia.f.christensen?igsh=cHZkODgxczJqZmxl
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Oct 28, 2024 · 16 min

AI, Technological Progress & the Growth Dilemma w/ Economist DANIEL SUSSKIND - Highlights

“The running theme in all of my work has been technology. The first book that I co-authored with my dad was published in 2015. The second book I wrote was A World Without Work: Technology, Automation, and How We Should Respond, published in 2020, just before the pandemic began. My new book Growth: A Reckoning is about growth, but also technological progress, because what drives growth is technological progress—we have a choice to change the nature of growth, and the same is true of our technological progress. To reach a dynamic economy capable of generating ever more ideas about the world, we need to use the technologies we have to generate new ideas about the world. One of the technologies I've been particularly excited by was AlphaFold, developed by DeepMind to solve protein folding problems in biology. Essentially, understanding the 3D shape of proteins is important for understanding disease and designing effective treatment, but incredibly difficult to figure out, and AlphaFold has solved this problem by providing the 3D structures of millions of proteins. As the only economist in The Institute for Ethics in AI, I’ve always found the moral, ethical side of technology interesting. I often get asked, “What can machines do, and what can they not do?” But I think one of the most troubling, but also one of the most fascinating things about technology is that it is forcing us to ask the question “What does it really mean to be human? What is humanity?” For a long time, many people thought the core of what it means to be a human being is to be a creative thing. But with the arrival of generative AI in the last few years, I think that has been really called into question. These AI systems are particularly good at creative tasks—coming up with original, novel text, images, and video. In fact, I actually use these AI systems to generate bedtime stories with my children—getting the kids to craft a good prompt is quite a fun, intellectually demanding exercise, and these technologies now give my children a storytelling capability that would have been unimaginable only a few years ago. So, one of the interesting philosophical consequences of these technologies is that they are challenging some of the complacency and deep-rooted assumptions about what it really means to be a human being.”

Daniel Susskind is a Research Professor in Economics at King's College London and a Senior Research Associate at the Institute for Ethics in AI at Oxford University. He is the author of A World without Work and co-author of the bestselling The Future of the Professions. Previously, he worked in various roles in the British Government: in the Prime Minister’s Strategy Unit, in the Policy Unit in 10 Downing Street, and in the Cabinet Office. His latest book is Growth: A Reckoning.
www.danielsusskind.com
www.penguin.co.uk/books/446381/growth-by-susskind-daniel/9780241542309
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Oct 4, 2024 · 13 min

Growth: A Reckoning with Economist DANIEL SUSSKIND

How can we look beyond GDP and develop new metrics that balance growth with human flourishing and environmental well-being? How can we be more engaged global citizens? In this age of AI, what does it really mean to be human? And how are our technologies transforming us?

Daniel Susskind is a Research Professor in Economics at King's College London and a Senior Research Associate at the Institute for Ethics in AI at Oxford University. He is the author of A World without Work and co-author of the bestselling The Future of the Professions. Previously, he worked in various roles in the British Government: in the Prime Minister’s Strategy Unit, in the Policy Unit in 10 Downing Street, and in the Cabinet Office. His latest book is Growth: A Reckoning.

“The running theme in all of my work has been technology. The first book that I co-authored with my dad was published in 2015. The second book I wrote was A World Without Work: Technology, Automation, and How We Should Respond, published in 2020, just before the pandemic began. My new book Growth: A Reckoning is about growth, but also technological progress, because what drives growth is technological progress—we have a choice to change the nature of growth, and the same is true of our technological progress. To reach a dynamic economy capable of generating ever more ideas about the world, we need to use the technologies we have to generate new ideas about the world. One of the technologies I've been particularly excited by was AlphaFold, developed by DeepMind to solve protein folding problems in biology. Essentially, understanding the 3D shape of proteins is important for understanding disease and designing effective treatment, but incredibly difficult to figure out, and AlphaFold has solved this problem by providing the 3D structures of millions of proteins. As the only economist in The Institute for Ethics in AI, I’ve always found the moral, ethical side of technology interesting. I often get asked, “What can machines do, and what can they not do?” But I think one of the most troubling, but also one of the most fascinating things about technology is that it is forcing us to ask the question “What does it really mean to be human? What is humanity?” For a long time, many people thought the core of what it means to be a human being is to be a creative thing. But with the arrival of generative AI in the last few years, I think that has been really called into question. These AI systems are particularly good at creative tasks—coming up with original, novel text, images, and video. In fact, I actually use these AI systems to generate bedtime stories with my children—getting the kids to craft a good prompt is quite a fun, intellectually demanding exercise, and these technologies now give my children a storytelling capability that would have been unimaginable only a few years ago. So, one of the interesting philosophical consequences of these technologies is that they are challenging some of the complacency and deep-rooted assumptions about what it really means to be a human being.”
www.danielsusskind.com
www.penguin.co.uk/books/446381/growth-by-susskind-daniel/9780241542309
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Oct 4, 2024 · 56 min

The Human Smart City: Balancing Ecology & Economy with CARLOS MORENO - Highlights

“This is the difference between a technological smart city and a real human smart city, with the 15-minute city as the expression of a human-centered urban approach. This is our challenge for the next decades and our target: to humanize our cities. The Olympic Games in Paris have shown the world that it is possible to recreate, to regenerate a really vibrant city with harmonious life between districts and different places, with the role of the Seine River as nature in the presence of a lot of people—for having more real livability, and not an illusory computer life driven by social networks.”

Carlos Moreno was born in Colombia in 1959 and moved to France at the age of 20. He is known for his influential "15-Minute City" concept, embraced by Paris Mayor Anne Hidalgo and leading cities around the world. Scientific Director of the "Entrepreneurship - Territory - Innovation" Chair at the Paris Sorbonne Business School, he is an international expert on the Human Smart City and a Knight of the French Legion of Honour. He is a recipient of the Obel Award and the UN-Habitat Scroll of Honour. His latest book is The 15-Minute City: A Solution to Saving Our Time and Our Planet.
https://www.moreno-web.net/
https://www.wiley.com/en-us/The+15-Minute+City%3A+A+Solution+to+Saving+Our+Time+and+Our+Planet-p-9781394228140
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Oct 3, 2024 · 14 min

The 15-Minute City: A Solution to Saving Our Time & Our Planet with CARLOS MORENO

How can the 15-minute city model revolutionize urban living, enhance wellbeing, and reduce our carbon footprint? Online shopping is turning cities into ghost towns. We can now buy anything anywhere, anytime. How can we learn to stop scrolling and start strolling, and create more livable, sustainable communities we are happy to call home?

Carlos Moreno was born in Colombia in 1959 and moved to France at the age of 20. He is known for his influential "15-Minute City" concept, embraced by Paris Mayor Anne Hidalgo and leading cities around the world. Scientific Director of the "Entrepreneurship - Territory - Innovation" Chair at the Paris Sorbonne Business School, he is an international expert on the Human Smart City and a Knight of the French Legion of Honour. He is a recipient of the Obel Award and the UN-Habitat Scroll of Honour. His latest book is The 15-Minute City: A Solution to Saving Our Time and Our Planet.

“This is the difference between a technological smart city and a real human smart city, with the 15-minute city as the expression of a human-centered urban approach. This is our challenge for the next decades and our target: to humanize our cities. The Olympic Games in Paris have shown the world that it is possible to recreate, to regenerate a really vibrant city with harmonious life between districts and different places, with the role of the Seine River as nature in the presence of a lot of people—for having more real livability, and not an illusory computer life driven by social networks.”
https://www.moreno-web.net/
https://www.wiley.com/en-us/The+15-Minute+City%3A+A+Solution+to+Saving+Our+Time+and+Our+Planet-p-9781394228140
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Oct 3, 2024 · 38 min

Neuroscience, AI & The Future of Humanity - DR. BEN SHOFTY - Highlights

“I'm one of the people who believe that anything that we as human beings can imagine will eventually happen. So, if somebody has raised the possibility of having brain implants that augment the brain and generate additional functions, I feel like it will eventually happen. There are a lot of private companies, like Elon Musk's Neuralink and others, that are busy designing these interfaces and planning these devices. Of course, nothing is available or even close to completion right now. The next step, of course, would be to modulate them. Just like anything else in medicine, it will start, or has already started, with pathological states, which we've talked about, and people looking for potential interventions through TMS (transcranial magnetic stimulation). It doesn't necessarily have to be invasive, but of course the next step, especially when we're talking about the brain, is to intervene and generate additional functions or to improve the way the brain functions. Many people are working on trying to generate memory augmentation, navigation augmentations, and a lot of other functions. I assume eventually it will reach a point where we'll be able to pick and choose what we want to augment about our own brains. I assume that the technology will be there eventually. And this is something that will be a part of the natural evolution of the human race.”

Dr. Ben Shofty is a functional neurosurgeon affiliated with the University of Utah. He graduated from the Tel Aviv University Faculty of Medicine, received his PhD from the Israel Institute of Technology during his neurosurgical training, and completed his training at the Tel Aviv Medical Center and Baylor University. He was also an Israeli national rugby player. His practice specializes in neuromodulation and exploring treatments for disorders such as OCD, depression, and epilepsy, among others, while also seeking to understand the science behind creativity, mind-wandering, and the many complexities of the brain.
https://healthcare.utah.edu/find-a-doctor/ben-shofty
https://academic.oup.com/brain/advance-article/doi/10.1093/brain/awae199/7695856
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Sep 27, 2024 · 17 min

The Neuroscience of Creativity with DR. BEN SHOFTY

Where do creative thoughts come from? How can we harness our stream of consciousness and spontaneity to express ourselves? How are mind-wandering, meditation, and the arts good for our creativity and physical and mental well-being?

Dr. Ben Shofty is a functional neurosurgeon affiliated with the University of Utah. He graduated from the Tel Aviv University Faculty of Medicine, received his PhD from the Israel Institute of Technology during his neurosurgical training, and completed his training at the Tel Aviv Medical Center and Baylor University. He was also an Israeli national rugby player. His practice specializes in neuromodulation and exploring treatments for disorders such as OCD, depression, and epilepsy, among others, while also seeking to understand the science behind creativity, mind-wandering, and the many complexities of the brain.

“I'm one of the people who believe that anything that we as human beings can imagine will eventually happen. So, if somebody has raised the possibility of having brain implants that augment the brain and generate additional functions, I feel like it will eventually happen. There are a lot of private companies, like Elon Musk's Neuralink and others, that are busy designing these interfaces and planning these devices. Of course, nothing is available or even close to completion right now. The next step, of course, would be to modulate them. Just like anything else in medicine, it will start, or has already started, with pathological states, which we've talked about, and people looking for potential interventions through TMS (transcranial magnetic stimulation). It doesn't necessarily have to be invasive, but of course the next step, especially when we're talking about the brain, is to intervene and generate additional functions or to improve the way the brain functions. Many people are working on trying to generate memory augmentation, navigation augmentations, and a lot of other functions. I assume eventually it will reach a point where we'll be able to pick and choose what we want to augment about our own brains. I assume that the technology will be there eventually. And this is something that will be a part of the natural evolution of the human race.”
https://healthcare.utah.edu/find-a-doctor/ben-shofty
https://academic.oup.com/brain/advance-article/doi/10.1093/brain/awae199/7695856
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Sep 27, 2024 · 49 min

What is good design? How is AI shaping our world? - SCOTT DOORLEY & CARISSA CARTER - Co-authors of Assembling Tomorrow - Highlights

“The way we understand the world and how the world actually works is just not mapped perfectly. That kind of leads to problems because we don't know exactly what we're doing in the world. We can't see all the repercussions of the things we create until later on. One silver lining about the technologies we're creating is that technologies like AI could be used to help us with this issue, with the fact that our mental models aren't exactly in line with how the world works. AI is actually very good at predicting and modeling outcomes. It could be used to understand climate change better so that we're able to understand it in a way that allows us to act. It could also help us predict the impacts of the things that we're making. So there's a bit of a silver lining in here, even though it can feel scary to be in a situation where your mental model and how the world works are not in line.”

“I worry that AI is changing my thoughts and can control my thoughts, and that used to sound really far-fetched and now seems sort of middle of the road. I guarantee in a year's time that will sound like a very normal concern. Social listening is very sophisticated. All of the data in the websites that we visit, the data trails that we leave out in the world, are tracking us—our locations, our behaviors, and our habits—such that there are many sites out there that can predict exactly what we're thinking and feeling and feed us advertising content, or things that aren't even advertising content, that can change what our next behaviors are. I think that's getting more and more sophisticated. We have already seen our political elections affected by mass attacks on our social media. When that comes down to our individual agency and behavior, I think that's something we do need to be concerned about. The way that we as individuals can combat it is to be aware that it's happening. Really start to notice the unnoticed, and I still feel optimistic amongst this concern.”

Scott Doorley is the Creative Director at Stanford's d.school and co-author of Make Space. He teaches design communication, and his work has been featured in museums, architecture and urbanism, and the New York Times. Carissa Carter is the Academic Director at Stanford's d.school and author of The Secret Language of Maps. She teaches courses on emerging technologies and data visualization and received Fast Company and Core 77 awards for her work on designing with machine learning and blockchain. Together, they co-authored Assembling Tomorrow: A Guide to Designing a Thriving Future.
www.scottdoorley.com
www.snowflyzone.com
https://dschool.stanford.edu/
www.penguinrandomhouse.com/books/623529/assembling-tomorrow-by-scott-doorley-carissa-carter-and-stanford-dschool-illustrations-by-armando-veve/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Sep 21, 2024 · 21 min

Can Design Save the World? - SCOTT DOORLEY & CARISSA CARTER - Co-authors of Assembling Tomorrow - Directors of Stanford’s d.school

How can we design and adapt for the uncertainties of the 21st century? How do emotions shape our decisions and the way we design the world around us?

Scott Doorley is the Creative Director at Stanford's d.school and co-author of Make Space. He teaches design communication, and his work has been featured in museums, architecture and urbanism, and the New York Times. Carissa Carter is the Academic Director at Stanford's d.school and author of The Secret Language of Maps. She teaches courses on emerging technologies and data visualization and received Fast Company and Core 77 awards for her work on designing with machine learning and blockchain. Together, they co-authored Assembling Tomorrow: A Guide to Designing a Thriving Future.

“The way we understand the world and how the world actually works is just not mapped perfectly. That kind of leads to problems because we don't know exactly what we're doing in the world. We can't see all the repercussions of the things we create until later on. One silver lining about the technologies we're creating is that technologies like AI could be used to help us with this issue, with the fact that our mental models aren't exactly in line with how the world works. AI is actually very good at predicting and modeling outcomes. It could be used to understand climate change better so that we're able to understand it in a way that allows us to act. It could also help us predict the impacts of the things that we're making. So there's a bit of a silver lining in here, even though it can feel scary to be in a situation where your mental model and how the world works are not in line.”

“I worry that AI is changing my thoughts and can control my thoughts, and that used to sound really far-fetched and now seems sort of middle of the road. I guarantee in a year's time that will sound like a very normal concern. Social listening is very sophisticated. All of the data in the websites that we visit, the data trails that we leave out in the world, are tracking us—our locations, our behaviors, and our habits—such that there are many sites out there that can predict exactly what we're thinking and feeling and feed us advertising content, or things that aren't even advertising content, that can change what our next behaviors are. I think that's getting more and more sophisticated. We have already seen our political elections affected by mass attacks on our social media. When that comes down to our individual agency and behavior, I think that's something we do need to be concerned about. The way that we as individuals can combat it is to be aware that it's happening. Really start to notice the unnoticed, and I still feel optimistic amongst this concern.”
www.scottdoorley.com
www.snowflyzone.com
https://dschool.stanford.edu/
www.penguinrandomhouse.com/books/623529/assembling-tomorrow-by-scott-doorley-carissa-carter-and-stanford-dschool-illustrations-by-armando-veve/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
Image credit: Patrick Beaudouin

Sep 21, 2024 · 57 min

AI, Tech & The Future of Museums - STEPHEN REILY, Founding Director of Remuseum on Transforming Cultural Spaces

“The opportunity is that we have never had a public that is more passionate and obsessed with visual imagery. If the owners of the best original imagery in the world can't figure out how to take advantage of the fact that the world has now become obsessed with these treasures that we have to offer as museums, then shame on us. This is the opportunity to say: if you're spending all day scrolling on Instagram looking for amazing imagery, come and see the original source. Come and see the real work. Let us figure out how to make that connection.”

Stephen Reily is the Founding Director of Remuseum, an independent research project housed at Crystal Bridges Museum of American Art in Bentonville, Arkansas. Funded by arts patron David Booth with additional support from the Ford Foundation, Remuseum focuses on advancing relevance and governance in museums across the U.S. He works with museums to create a financially sustainable strategy that is human-focused, centering on inclusion, diversity, and important causes like climate change. During his time as director of the Speed Art Museum in Louisville, KY, Reily presented Promise, Witness, Remembrance, an exhibition in response to the killing of Breonna Taylor and a year of protests in Louisville. In 2022, he co-wrote a book documenting the exhibition. An active civic leader, Reily has been a part of numerous community organizations and boards, including the Reily Reentry Project, which supports expungement programs for Kentucky citizens, and Creative Capital, which offers grants for the arts, and he founded Seed Capital Kentucky, a non-profit that aims to improve the food economy in the area. A Yale and Stanford Law graduate, Reily clerked for U.S. Supreme Court Justice John Paul Stevens before launching a successful entrepreneurial career, experiences he draws upon for public engagement initiatives.
https://remuseum.org
https://crystalbridges.org
www.stephenreily.com
www.kentuckypress.com/9781734248517/promise-witness-remembrance
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Sep 11, 2024 · 16 min

AI, Curiosity, Cognition & Creativity with Neuroscientist DR. JACQUELINE GOTTLIEB

“We have an onslaught of information the moment we open our eyes. We evolved to deal with an onslaught of information, and we are masters at focusing and ignoring vast amounts of information. Now, AI in this digital age is a relatively new stream of information, which is man-made, so we make it more salient. So, yes, it's harder to ignore it, but people can learn to ignore it, and indeed, it's a learning process. I think it will also require learning how to teach our children. I mean, we're raising generations of kids who will take AI and the digital world as a given. To them, it will be no different than a chair and a table were to us. So they will learn to not be so distracted by chairs and tables.”

Dr. Jacqueline Gottlieb is a Professor of Neuroscience and Principal Investigator at Columbia University’s Zuckerman Mind Brain Behavior Institute. Dr. Gottlieb studies the mechanisms that underlie the brain's higher cognitive functions, including decision making, memory, and attention. Her interest is in how the brain gathers the evidence it needs—and ignores what it doesn’t—during everyday tasks and during special states such as curiosity.
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Sep 4, 2024 · 21 min

AI, Cognitive Bias & the Future of Journalism w/ Pulitzer Prize-winning Journalist NICHOLAS KRISTOF

“There have been some alarming experiments that show AI arguments are better at persuading people than humans are at persuading people. I think that's partly because humans tend to make the arguments that we ourselves find most persuasive. For example, a liberal will make the arguments that will appeal to liberals, but the person you're probably trying to persuade is somebody in the center. We're just not good at putting ourselves in other people's shoes. That's something I try very hard to do in the column, but I often fall short. And with AI, I think people are going to become more vulnerable to being manipulated. I think we're at risk of being manipulated by our own cognitive biases and the tendency to reach out for information sources that will confirm our prejudices. Years ago, the theorist Nicholas Negroponte wrote that the internet was going to bring a product he called the Daily Me—basically information perfectly targeted to our own brains—and that's kind of what we've gotten now. A conservative will get conservative sources that show how awful Democrats are and will have information that buttresses that point of view, while liberals will get the liberal version of that. So, I think we have to try to understand those cognitive biases and understand the degree to which we are all vulnerable to being fooled by selection bias. I'd like to see high schools, in particular, have more information training and media literacy programs so that younger people can learn that there are some news sources that are a little better than others, and that just because you see something on Facebook doesn't make it true.”

Nicholas D. Kristof is a two-time Pulitzer-winning journalist and op-ed columnist for The New York Times, where he was previously bureau chief in Hong Kong, Beijing, and Tokyo. Kristof is a regular CNN contributor and has covered, among many other events and crises, the Tiananmen Square protests, the Darfur genocide, the Yemeni civil war, and the U.S. opioid crisis. He is the author of the memoir Chasing Hope: A Reporter's Life, and coauthor, with his wife, Sheryl WuDunn, of five previous books: Tightrope, A Path Appears, Half the Sky, Thunder from the East, and China Wakes.
www.nytimes.com/column/nicholas-kristof
www.penguinrandomhouse.com/books/720814/chasing-hope-by-nicholas-d-kristof
Family vineyard & apple orchard in Yamhill, Oregon: www.kristoffarms.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Aug 26, 2024 · 16 min

AI, Populism & Consumer Society with Historian FRANK TRENTMANN

“The bridge between Out of the Darkness and my previous work, which looked at the transformation of consumer culture in the world, is morality. One thing that became clear in writing Empire of Things was that there's virtually no time or place in history where consumption isn't heavily moralized. Our lifestyle is treated as a mirror of our virtues and sins. And in the course of modern history, there's been a remarkable moral shift in the way that consumption used to be seen as something that led you astray or undermined authority, status, gender roles, and wasted money, to a source of growth, a source of self-fashioning, the way we create our own identity. In the last few years, the environmental crisis has led to new questions about whether consumption is good or bad. And in 2015, during the refugee crisis when Germany took in almost a million refugees, morality became a very powerful way in which Germans talked about themselves—as 'humanitarian world champions,' as one politician called it. I realized that there are many other topics, from family and work to saving the environment, and of course with regard to the German responsibility for the Holocaust and the war of extermination, where German public discourse is heavily moralistic, so I became interested in charting that historical process.”

What can we learn from Germany's postwar transformation to help us address today's environmental and humanitarian crises? With the rise of populism, authoritarianism, and digital propaganda, how can history provide insights into the challenges of modern democracy?

Frank Trentmann is a Professor of History at Birkbeck, University of London, and at the University of Helsinki. He is a prize-winning historian, having received awards such as the Whitfield Prize, the Austrian Wissenschaftsbuch/Science Book Prize, the Humboldt Prize for Research, and the 2023 Bochum Historians' Award. He has also been named a Moore Scholar at Caltech. He is the author of Empire of Things and Free Trade Nation. His latest book is Out of the Darkness: The Germans 1942 to 2022, which explores Germany's transformation after the Second World War.
www.bbk.ac.uk/our-staff/profile/8009279/frank-trentmann
www.penguin.co.uk/authors/32274/frank-trentmann?tab=penguin-books
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Aug 15, 2024 · 15 min

Is AI capable of creating a protest song that disrupts oppression & inspires social change? - JAKE FERGUSON, ANTHONY JOSEPH & JERMAIN JACKMAN

“There's something raw about The Architecture of Oppression, both part one and part two. There's a raw realness and authenticity in those songs that AI can't create. There's a lived experience that AI won't understand, and there's a feeling in those songs. And it's not just in the words from the spoken word artists; it's in the instruments that are being played. It's in the voice that you hear. You hear the pain, you hear the struggle, you hear the joy, you hear all of those emotions in all of those songs. And that's something that AI can't make up or create.”

Jake Ferguson is an award-winning musician known for his work with The Heliocentrics and as a solo artist under the name The Brkn Record. Alongside legendary drummer Malcolm Catto, Ferguson has composed two film scores and over 10 albums, collaborating with icons like Archie Shepp, Mulatu Astatke, and Melvin Van Peebles. His latest album is The Architecture of Oppression Part 2. The album also features singer and political activist Jermain Jackman, a former winner of The Voice (2014), and the T.S. Eliot Prize-winning poet and musician Anthony Joseph.

“I think as humans, we forget. We are often limited by our own stereotypes, and we don't see that in everyone there's the potential for beauty and love and all these things. And I think The Architecture of Oppression, both parts one and two, are really a reflection of all the community and civil rights work that I've been doing for the same amount of time, really - 25 years. And I wanted to try and mix my day job and my music side, so bringing those two sides of my life together. I wanted to create a platform for black artists, black singers, and poets who I really admire. Jermain is somebody I've worked with for probably about six, seven years now. He's also in the trenches of the black civil rights struggle. We worked together on a number of projects, but it was very interesting to then work with Jermain in a purely artistic capacity. And it was a no-brainer to give Anthony a call for this second album because I know of his pedigree, and he's much more able to put ideas and thoughts on paper than I would be able to.”

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Aug 6, 2024 · 14 min

The SDGs, AI & UN Summit of the Future - GUILLAUME LAFORTUNE - VP, UN SDSN, Paris

“The SDSN has been set up to mobilize research and science for the Sustainable Development Goals. Each year, we aim to provide a fair and accurate assessment of countries' progress on the 17 Sustainable Development Goals. The development goals were adopted back in 2015 by all UN member states, marking the first time in human history that we have a common goal for the entire world. Our goal each year with the SDG Index is to have sound methodologies and translate these into actionable insights that can generate impactful results at the end of the day. Out of all the targets that we track, only 16 percent are estimated to be on track. This agenda combines not only environmental development but also social development, economic development, and good governance. Currently, none of the SDGs are on track to be achieved at the global level.”

In today's podcast, we talk with Guillaume Lafortune, Vice President and Head of the Paris Office of the UN Sustainable Development Solutions Network (SDSN), the largest global network of scientists and practitioners dedicated to implementing the Sustainable Development Goals (SDGs). We discuss the intersections of sustainability, global progress, the UN Summit of the Future, and the daunting challenges we face. From the impact of war on climate initiatives to transforming data into narratives that drive change, we explore how global cooperation, education, and technology pave the way for a sustainable future and look at the lessons of history and the power of diplomacy in shaping our path forward.

Guillaume Lafortune joined SDSN in 2017 to lead work on SDG data, policies, and financing, including the preparation of the annual Sustainable Development Report (which includes the SDG Index and Dashboards). Between 2020 and 2022, Guillaume was a member of The Lancet Commission on COVID-19, where he coordinated the task forces on “Fiscal Policy and Financial Markets” and “Green Recovery” and co-authored the final report of the Commission. Guillaume is also a member of the Grenoble Center for Economic Research (CREG) at Grenoble Alpes University. Previously, he served as an economist at the OECD in Paris and at the Ministry of Economic Development in the Government of Quebec (Canada). Guillaume is the author of 50+ scientific publications, book chapters, policy briefs, and international reports on sustainable development, economic policy, and good governance.

SDSN's Summit of the Future Recommendations
SDG Transformation Center
SDSN Global Commission for Urban SDG Finance
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jul 31, 2024 · 15 min

AI & How Utopian Visions Shape Our Reality & Future - Highlights - S. D. CHROSTOWSKA

“There’s the existing AI and the dream of artificial general intelligence that is aligned with our values and will make our lives better. Certainly, the techno-utopian dream is that it will lead us towards utopia. It is the means of organizing human collectivities, human societies, in a way that would reconcile all the variables, all the things that we can't reconcile because we don't have enough of a fine-grained understanding of how people interact, the different motivations of their psychologies and of societies, of groups, of people. Of course, that's another kind of psychology that we're talking about. So I think the dream of AI is a utopian dream that stands correcting, but it is itself being corrected by those who are the curators of that technology. Now you asked me about the changing role of artists in this landscape. I would say, first of all, that I'm for virtuosity. And this makes me think of AI and a higher level AI, it would be virtuous before it becomes super intelligence.”

S. D. Chrostowska is professor of humanities at York University, Canada. She is the author of several books, among them Permission, The Eyelid, A Cage for Every Child, and, most recently, Utopia in the Age of Survival: Between Myth and Politics. Her essays have appeared in such venues as Public Culture, Telos, Boundary 2, and The Hedgehog Review. She also coedits the French surrealist review Alcheringa and is curator of the 19th International Exhibition of Surrealism, Marvellous Utopia, which runs from July to September 2024 in Saint-Cirq-Lapopie, France.

https://profiles.laps.yorku.ca/profiles/sylwiac/
www.sup.org/books/title/?id=33445
https://chbooks.com/Books/T/The-Eyelid
https://ciscm.fr/en/merveilleuse-utopie
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jul 25, 2024 · 20 min

Utopia in the Age of Survival with S. D. CHROSTOWSKA

As Surrealism turns 100, what can it teach us about the importance of dreaming and creating a better society? Will we wake up from the consumerist dream sold to us by capitalism, and how would that change our ideas of utopia?

S. D. Chrostowska is professor of humanities at York University, Canada. She is the author of several books, among them Permission, The Eyelid, A Cage for Every Child, and, most recently, Utopia in the Age of Survival: Between Myth and Politics. Her essays have appeared in such venues as Public Culture, Telos, Boundary 2, and The Hedgehog Review. She also coedits the French surrealist review Alcheringa and is curator of the 19th International Exhibition of Surrealism, Marvellous Utopia, which runs from July to September 2024 in Saint-Cirq-Lapopie, France.

“There’s the existing AI and the dream of artificial general intelligence that is aligned with our values and will make our lives better. Certainly, the techno-utopian dream is that it will lead us towards utopia. It is the means of organizing human collectivities, human societies, in a way that would reconcile all the variables, all the things that we can't reconcile because we don't have enough of a fine-grained understanding of how people interact, the different motivations of their psychologies and of societies, of groups, of people. Of course, that's another kind of psychology that we're talking about. So I think the dream of AI is a utopian dream that stands correcting, but it is itself being corrected by those who are the curators of that technology. Now you asked me about the changing role of artists in this landscape. I would say, first of all, that I'm for virtuosity. And this makes me think of AI and a higher level AI, it would be virtuous before it becomes super intelligence.”

https://profiles.laps.yorku.ca/profiles/sylwiac/
www.sup.org/books/title/?id=33445
https://chbooks.com/Books/T/The-Eyelid
https://ciscm.fr/en/merveilleuse-utopie
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jul 25, 2024 · 44 min

AI’s Role in Society, Culture & Climate with CHARLIE HERTZOG YOUNG

The planet’s well-being unites us all, from ecosystems to societies, global systems to individual health. How is planetary health linked to mental health?

Charlie Hertzog Young is a researcher, writer and award-winning activist. He identifies as a “proudly mad bipolar double amputee” and has worked for the New Economics Foundation, the Royal Society of Arts, the Good Law Project, the Four Day Week Campaign and the Centre for Progressive Change, as well as the UK Labour Party under three consecutive leaders. Charlie has spoken at the LSE, the UN and the World Economic Forum. He studied at Harvard, SOAS and Schumacher College and has written for The Ecologist, The Independent, Novara Media, Open Democracy and The Guardian. He is the author of Spinning Out: Climate Change, Mental Health and Fighting for a Better Future.

https://charliehertzogyoung.me
https://footnotepress.com/books/spinning-out/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jul 19, 2024 · 7 min

The Future of Energy - RICHARD BLACK - Director, Policy & Strategy, Ember - Fmr. BBC Environment Correspondent

Richard Black spent 15 years as a science and environment correspondent for the BBC World Service and BBC News before setting up the Energy & Climate Intelligence Unit. He now lives in Berlin and is the Director of Policy and Strategy at the global clean energy think tank Ember, which aims to accelerate the clean energy transition with data and policy. He is the author of The Future of Energy and Denied: The Rise and Fall of Climate Contrarianism, and is an Honorary Research Fellow at Imperial College London.

“I guess no one needs AI in the same way that we need oil or food. So, from that point of view, it's a lot easier. AI is fascinating, slightly scary. I find that the amount of discussion of setting it off in a carefully thought-through direction is way lower than the amount of fascination with the latest thing that it can do. Often fiction should be our guide to these things or can be a valuable guide to these things. And if we go back to Isaac Asimov and his three laws of robotics, and to all these three very fundamental points that he said should be embedded in all automata, there's no discussion of that around AI, like none. I personally find that quite a hole in the discourse that we're having.”

https://mhpbooks.com/books/the-future-of-energy
https://ember-climate.org/about/people/richard-black
https://ember-climate.org
www.therealpress.co.uk/?s=Richard+black
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jul 15, 2024 · 13 min

DIANE VON FÜRSTENBERG: Woman in Charge & How AI Will Change Storytelling w/ Oscar-winning Director SHARMEEN OBAID-CHINOY

Sharmeen Obaid-Chinoy is an Oscar and Emmy award-winning Canadian-Pakistani filmmaker whose work highlights extraordinary women and their stories. She earned her first Academy Award in 2012 for her documentary Saving Face, about the Pakistani women targeted by brutal acid attacks. Today, Obaid-Chinoy is the first female film director to have won two Oscars by the age of 37. In 2023, it was announced that Obaid-Chinoy would direct the next Star Wars film, starring Daisy Ridley. Her most recent project, co-directed alongside Trish Dalton, is the new documentary Diane von Fürstenberg: Woman in Charge, about the trailblazing Belgian fashion designer who invented the wrap dress 50 years ago. The film had its world premiere as the opening night selection at the 2024 Tribeca Festival on June 5th and premiered on June 25th on Hulu in the U.S. and Disney+ internationally. A product of Obaid-Chinoy's incredibly talented female filmmaking team, Woman in Charge provides an intimate look into Diane von Fürstenberg’s life and accomplishments and chronicles the trajectory of her signature dress from an innovative fashion statement to a powerful symbol of feminism.

“I think it's very early for us to see how AI is going to impact us all, especially documentary filmmakers. And so I embrace technology, and I encourage everyone as filmmakers to do so. We're looking at how AI is facilitating filmmakers to tell stories, create more visual worlds. I think that right now we're in the play phase of AI, where there's a lot of new tools and you're playing in a sandbox with them to see how they will develop. I don't think that AI has developed to the extent that it is in some way dramatically changing the film industry as we speak, but in the next two years, it will. We have yet to see how it will. As someone who creates films, I always experiment, and then I see what it is that I'd like to take from that technology as I move forward.”

www.hulu.com/movie/diane-von-furstenberg-woman-in-charge-95fb421e-b7b1-4bfc-9bbf-ea666dba0b02
https://www.disneyplus.com/movies/diane-von-furstenberg-woman-in-charge/1jrpX9AhsaJ6
https://socfilms.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jul 5, 2024 · 10 min

Does AI-generated Perfection Detach Us from Reality, Life & Human Connection? - Highlights - HENRY AJDER

“Having worked in this space for seven years, really since the inception of deepfakes in late 2017, for some time it was possible with just a few hours a day to really be on top of the key technical developments. It's now truly global. AI-generated media have really exploded, particularly in the last 18 months, but they've been bubbling under the surface for some time in various different use cases. The disinformation and deepfakes in the political sphere really match some of the fears held five, six years ago that at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections have really kind of reached a fever pitch. Up until this year, I've always really said, ‘Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact.’ This year, it's no longer a question of are deepfakes going to be used, it's now how effective are they actually going to be? I'm worried. I think a lot of the discourse around gen AI and so on is very much you're either an AI zoomer or an AI doomer, right? But for me, I don't think we need to have this kind of mutually exclusive attitude. I think we can kind of look at different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly with the appropriate safety measures, guardrails, and understanding from people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI-generated content as a malicious tool.”

Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of generative AI and the synthetic media revolution. He advises organizations on the opportunities and challenges these technologies present, including Adobe, Meta, The European Commission, BBC, The Partnership on AI, and The House of Lords. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series The Future Will Be Synthesised.

www.henryajder.com
www.bbc.co.uk/programmes/m0017cgr
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 29, 2024 · 12 min

How is AI Changing Our Perception of Reality, Creativity & Human Connection? w/ HENRY AJDER - AI Advisor

How is artificial intelligence redefining our perception of reality and truth? Can AI be creative? And how is it changing art and innovation? Does AI-generated perfection detach us from reality and genuine human connection?

Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of generative AI and the synthetic media revolution. He advises organizations on the opportunities and challenges these technologies present, including Adobe, Meta, The European Commission, BBC, The Partnership on AI, and The House of Lords. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series The Future Will Be Synthesised.

“Having worked in this space for seven years, really since the inception of deepfakes in late 2017, for some time it was possible with just a few hours a day to really be on top of the key technical developments. It's now truly global. AI-generated media have really exploded, particularly in the last 18 months, but they've been bubbling under the surface for some time in various different use cases. The disinformation and deepfakes in the political sphere really match some of the fears held five, six years ago that at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections have really kind of reached a fever pitch. Up until this year, I've always really said, ‘Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact.’ This year, it's no longer a question of are deepfakes going to be used, it's now how effective are they actually going to be? I'm worried. I think a lot of the discourse around gen AI and so on is very much you're either an AI zoomer or an AI doomer, right? But for me, I don't think we need to have this kind of mutually exclusive attitude. I think we can kind of look at different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly with the appropriate safety measures, guardrails, and understanding from people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI-generated content as a malicious tool.”

www.henryajder.com
www.bbc.co.uk/programmes/m0017cgr
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 29, 2024 · 53 min

How to Fight for Truth & Protect Democracy in a Post-Truth World? - Highlights - LEE McINTYRE

“When AI takes over with our information sources and pollutes it to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Holocaust historian Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie. But to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

https://leemcintyrebooks.com
www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
https://mitpress.mit.edu/9780262545051/
https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
https://leemcintyrebooks.com/books/the-sin-eater/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 18, 2024 · 12 min

On Disinformation: How to Fight for Truth & Protect Democracy in the Age of AI - LEE McINTYRE

How do we fight for truth and protect democracy in a post-truth world? How does bias affect our understanding of facts?

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

“When AI takes over with our information sources and pollutes it to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Holocaust historian Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie. But to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

https://leemcintyrebooks.com
www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
https://mitpress.mit.edu/9780262545051/
https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
https://leemcintyrebooks.com/books/the-sin-eater/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 18, 2024 · 54 min

How will AI Affect Education, the Arts & Society? - Highlights - STEPHEN WOLFRAM

“Nobody, including people who worked on ChatGPT, really sort of expected this to work. It's something that we just didn't know scientifically what it would take to make something that was a fluent producer of human language. I think the big discovery is that this thing that has been sort of a proud achievement of our species, human language, is perhaps not as complicated as we thought it was. It's something that is more accessible to sort of simpler automation than we expected. And so, people have been asking me, when ChatGPT had come out, we were doing a bunch of things technologically around ChatGPT because kind of what, when ChatGPT is kind of stringing words together to make sentences, what does it do when it has to actually solve a computational problem? That's not what it does itself. It's a thing for stringing words together to make text. And so, how does it solve a computational problem? Well, like humans, the best way for it to do it is to use tools, and the best tool for many kinds of computational problems is tools that we've built. And so very early in kind of the story of ChatGPT and so on, we were figuring out how to have it be able to use the tools that we built, just like humans can use the tools that we built, to solve computational problems, to actually get sort of accurate knowledge about the world and so on. There's all these different possibilities out there. But our kind of challenge is to decide in which direction we want to go and then to let our automated systems pursue those particular directions.”

Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981 became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.

www.stephenwolfram.com
www.wolfram.com
www.wolframalpha.com
www.wolframscience.com/nks/
www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
www.wolframphysics.org
www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 14, 2024 · 12 min

What Role Do AI & Computational Language Play in Solving Real-World Problems? - STEPHEN WOLFRAM

How can computational language help decode the mysteries of nature and the universe? What is ChatGPT doing and why does it work? How will AI affect education, the arts and society?

Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981 became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.

“Nobody, including people who worked on ChatGPT, really sort of expected this to work. It's something that we just didn't know scientifically what it would take to make something that was a fluent producer of human language. I think the big discovery is that this thing that has been sort of a proud achievement of our species, human language, is perhaps not as complicated as we thought it was. It's something that is more accessible to sort of simpler automation than we expected. And so, people have been asking me, when ChatGPT had come out, we were doing a bunch of things technologically around ChatGPT because kind of what, when ChatGPT is kind of stringing words together to make sentences, what does it do when it has to actually solve a computational problem? That's not what it does itself. It's a thing for stringing words together to make text. And so, how does it solve a computational problem? Well, like humans, the best way for it to do it is to use tools, and the best tool for many kinds of computational problems is tools that we've built. And so very early in kind of the story of ChatGPT and so on, we were figuring out how to have it be able to use the tools that we built, just like humans can use the tools that we built, to solve computational problems, to actually get sort of accurate knowledge about the world and so on. There's all these different possibilities out there. But our kind of challenge is to decide in which direction we want to go and then to let our automated systems pursue those particular directions.”

www.stephenwolfram.com
www.wolfram.com
www.wolframalpha.com
www.wolframscience.com/nks/
www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
www.wolframphysics.org
www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 14, 2024 · 57 min

Can we have real conversations with AI? How do illusions help us make sense of the world? - Highlights - KEITH FRANKISH

“Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish mainly works in the philosophy of mind and has published widely about topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is also editor of Illusionism as a Theory of Consciousness and co-edits, among others, The Cambridge Handbook of Cognitive Science.

www.keithfrankish.com
www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
www.imprint.co.uk/product/illusionism
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 10, 202411 min

Is Consciousness an Illusion? with Philosopher KEITH FRANKISH

Is consciousness an illusion? Is it just a complex set of cognitive processes without a central, subjective experience? How can we better integrate philosophy with everyday life and the arts?

Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish mainly works in the philosophy of mind and has published widely on topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other works, The Cambridge Handbook of Cognitive Science.

“Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

www.keithfrankish.com
www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
www.imprint.co.uk/product/illusionism
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 10, 202457 min

What can AI teach us about human cognition & creativity? - Highlights - RAPHAËL MILLIÈRE

“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 17, 202410 min

How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE

How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity?

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 17, 20241h 1m

Is understanding AI a bigger question than understanding the origin of the universe? - Highlights, NEIL JOHNSON

“It gets back to this core question. I just wish I was a young scientist going into this because that's the question to answer: Why AI comes out with what it does. That's the burning question. It's like it's bigger than the origin of the universe to me as a scientist, and here's the reason why. The origin of the universe, it happened. That's why we're here. It's almost like a historical question asking why it happened. The AI future is not a historical question. It's a now and future question. I'm a huge optimist for AI, actually. I see it as part of that process of climbing its own mountain. It could do wonders for so many areas of science, medicine. When the car came out, the car initially is a disaster. But you fast forward, and it was the key to so many advances in society. I think it's exactly the same as AI. The big challenge is to understand why it works. AI existed for years, but it was useless. Nothing useful, nothing useful, nothing useful. And then maybe last year or something, now it's really useful. There seemed to be some kind of jump in its ability, almost like a shock wave. We're trying to develop an understanding of how AI operates in terms of these shockwave jumps. Revealing how AI works will help society understand what it can and can't do and therefore remove some of this dark fear of being taken over. If you don't understand how AI works, how can you govern it? To get effective governance, you need to understand how AI works because otherwise you don't know what you're going to regulate.”

How can physics help solve messy, real-world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people, and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

https://physics.columbian.gwu.edu/neil-johnson
https://donlab.columbian.gwu.edu
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 14, 202415 min

How can physics help solve real world problems? - NEIL JOHNSON, Head of Dynamic Online Networks Lab

How can physics help solve messy, real-world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people, and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

“It gets back to this core question. I just wish I was a young scientist going into this because that's the question to answer: Why AI comes out with what it does. That's the burning question. It's like it's bigger than the origin of the universe to me as a scientist, and here's the reason why. The origin of the universe, it happened. That's why we're here. It's almost like a historical question asking why it happened. The AI future is not a historical question. It's a now and future question. I'm a huge optimist for AI, actually. I see it as part of that process of climbing its own mountain. It could do wonders for so many areas of science, medicine. When the car came out, the car initially is a disaster. But you fast forward, and it was the key to so many advances in society. I think it's exactly the same as AI. The big challenge is to understand why it works. AI existed for years, but it was useless. Nothing useful, nothing useful, nothing useful. And then maybe last year or something, now it's really useful. There seemed to be some kind of jump in its ability, almost like a shock wave. We're trying to develop an understanding of how AI operates in terms of these shockwave jumps. Revealing how AI works will help society understand what it can and can't do and therefore remove some of this dark fear of being taken over. If you don't understand how AI works, how can you govern it? To get effective governance, you need to understand how AI works because otherwise you don't know what you're going to regulate.”

https://physics.columbian.gwu.edu/neil-johnson
https://donlab.columbian.gwu.edu
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 14, 202450 min

Exploring Consciousness, AI & Creativity with JOSEPH LEDOUX - Highlights

“We've got four billion years of biological accidents that created all of the intricate aspects of everything about life, including consciousness. And it's about what's going on in each of those cells at the time that allows it to be connected to everything else and for the information to be understood as it's being exchanged between those things with their multifaceted, deep, complex processing.”

Joseph LeDoux is a Professor of Neural Science at New York University (NYU) and was Director of the Emotional Brain Institute. His research primarily focuses on survival circuits, including their impacts on emotions, such as fear and anxiety. He has written a number of books in this field, including The Four Realms of Existence: A New Theory of Being Human, The Emotional Brain, Synaptic Self, Anxious, and The Deep History of Ourselves. LeDoux is also the lead singer and songwriter of the band The Amygdaloids.

www.joseph-ledoux.com
www.cns.nyu.edu/ebi
https://amygdaloids.net
www.hup.harvard.edu/books/9780674261259
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 3, 202414 min

How does the brain process emotions and music? JOSEPH LEDOUX - Neuroscientist, Author, Musician

How does the brain process emotions? How are emotional memories formed and stored in the brain, and how do they influence behavior, perception, and decision-making? How does music help us understand our emotions, memories, and the nature of consciousness?

Joseph LeDoux is a Professor of Neural Science at New York University (NYU) and was Director of the Emotional Brain Institute. His research primarily focuses on survival circuits, including their impacts on emotions, such as fear and anxiety. He has written a number of books in this field, including The Four Realms of Existence: A New Theory of Being Human, The Emotional Brain, Synaptic Self, Anxious, and The Deep History of Ourselves. LeDoux is also the lead singer and songwriter of the band The Amygdaloids.

“We've got four billion years of biological accidents that created all of the intricate aspects of everything about life, including consciousness. And it's about what's going on in each of those cells at the time that allows it to be connected to everything else and for the information to be understood as it's being exchanged between those things with their multifaceted, deep, complex processing.”

www.joseph-ledoux.com
www.cns.nyu.edu/ebi
https://amygdaloids.net
www.hup.harvard.edu/books/9780674261259
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
Music courtesy of Joseph LeDoux

May 3, 20241h 0m

Emotional Intelligence in the Age of AI - Highlights - DANIEL GOLEMAN

“AI is brilliant at cognitive empathy. However, the next kind is emotional empathy. Emotional empathy means: I know what you feel because I'm feeling it too. And this has to do with circuitry in the fore part of the brain, which creates a brain-to-brain circuit that's automatic, unconscious, and instantaneous. And emotions pass very well across that. I think AI might flunk here because it has no emotion. It can mimic empathy, but it doesn't really feel empathy. The third kind is empathic concern. Technically, it means caring. It's the basis of love. It's the same circuitry as a parent's love for a child, actually. But I think that leaders need this very much. AI has no emotion, so it doesn't have emotional self-awareness. It can't tune in. I don't think it can be empathic because AI is a set of codes, basically. It doesn't have the ability to manage emotion because it doesn't have emotion. It's interesting. I was just talking to a group at Microsoft, which is one of the leading developers of AI, and one of the people there was talking about inculcating love into AI or caring into AI as maybe an antidote to the negative potential of AI for humanity. But I think there will always be room for the human, for a leader. I don't think that people will find that they can trust AI the same way they can trust a leader who cares.”

Daniel Goleman is an American psychologist, author, and science journalist. Before becoming an author, Goleman was a science reporter for the New York Times for 12 years, covering psychology and the human brain. In 1995, Goleman published Emotional Intelligence, a New York Times bestseller. In his newly published book Optimal, Daniel Goleman discusses how people can enter an optimal state of high performance without facing symptoms of burnout in the workplace.

www.danielgoleman.info
www.harpercollins.com/products/optimal-daniel-golemancary-cherniss?variant=41046795288610
www.penguinrandomhouse.com/books/69105/emotional-intelligence-by-daniel-goleman/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 26, 202411 min

Author of Emotional Intelligence DANIEL GOLEMAN on Focus, Balance & Optimal Living

How can we enhance our emotional intelligence and avoid burnout in a changing world? How can we regain focus and perform in an optimal state? What do we mean by ecological intelligence?

Daniel Goleman is an American psychologist, author, and science journalist. Before becoming an author, Goleman was a science reporter for the New York Times for 12 years, covering psychology and the human brain. In 1995, Goleman published Emotional Intelligence, a New York Times bestseller. In his newly published book Optimal, Daniel Goleman discusses how people can enter an optimal state of high performance without facing symptoms of burnout in the workplace.

“AI is brilliant at cognitive empathy. However, the next kind is emotional empathy. Emotional empathy means: I know what you feel because I'm feeling it too. And this has to do with circuitry in the fore part of the brain, which creates a brain-to-brain circuit that's automatic, unconscious, and instantaneous. And emotions pass very well across that. I think AI might flunk here because it has no emotion. It can mimic empathy, but it doesn't really feel empathy. The third kind is empathic concern. Technically, it means caring. It's the basis of love. It's the same circuitry as a parent's love for a child, actually. But I think that leaders need this very much. AI has no emotion, so it doesn't have emotional self-awareness. It can't tune in. I don't think it can be empathic because AI is a set of codes, basically. It doesn't have the ability to manage emotion because it doesn't have emotion. It's interesting. I was just talking to a group at Microsoft, which is one of the leading developers of AI, and one of the people there was talking about inculcating love into AI or caring into AI as maybe an antidote to the negative potential of AI for humanity. But I think there will always be room for the human, for a leader. I don't think that people will find that they can trust AI the same way they can trust a leader who cares.”

www.danielgoleman.info
www.harpercollins.com/products/optimal-daniel-golemancary-cherniss?variant=41046795288610
www.penguinrandomhouse.com/books/69105/emotional-intelligence-by-daniel-goleman/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 26, 202453 min

Feminism, Resistance & AI in the Global South w/ INTAN PARAMADITHA - Author of The Wandering

“I've been playing with AI just to see what it can do. People who are not privileged with the skills of conceptualizing, the skills of abstract thinking, they will be replaced. And I'm just thinking about people from the Global South at this moment. People from the Global South have been working as supporters. They do a lot of support for creative work of entrepreneurs in the Global North. They do social media. They create content and things like that. The people who would provide the support live in, let's say, the Philippines. So, what I'm worried about is how AI technology could take the jobs of people who are not really trained to sort of do conceptual thinking.”

Intan Paramaditha is a writer and an academic. Her novel The Wandering (Harvill Secker/Penguin Random House UK), translated from the Indonesian by Stephen J. Epstein, was nominated for the Stella Prize in Australia and awarded the Tempo Best Literary Fiction in Indonesia, the English PEN Translates Award, and the PEN/Heim Translation Fund Grant from PEN America. She is the author of the short story collection Apple and Knife, the editor of Deviant Disciples: Indonesian Women Poets, part of the Translating Feminisms series of Tilted Axis Press, and the co-editor of The Routledge Companion to Asian Cinemas (forthcoming 2024). Her essay, “On the Complicated Questions Around Writing About Travel,” was selected for The Best American Travel Writing 2021. She holds a Ph.D. from New York University and teaches media and film studies at Macquarie University, Sydney.

https://intanparamaditha.com
www.penguinrandomhouse.ca/books/626055/the-wandering-by-intan-paramaditha/9781787301184
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 25, 202411 min

Exploring Science, Music, AI & Consciousness with MAX COOPER - Highlights

“As technology becomes more dominant, the arts become ever more important for us to stay in touch with the things that the sciences can't tackle. What is it actually like to be a person? What's actually important? We can have this endless progress inside this capitalist machine for greater wealth and longer life and more happiness, according to some metric. Or we can try and quantify society and push it forward. Ultimately, we all have to decide what's important to us as humans, and we need the arts to help with that. So, I think what's important really is just exposing ourselves to as many different ideas as we can, being open-minded, and trying to learn about all facets of life so that we can understand each other as well. And the arts is an essential part of that.”

How is being an artist different from a machine that is programmed to perform a set of actions? How can we stop thinking about artworks as objects, and start thinking about them as triggers for experiences? In this conversation with Max Cooper, we discuss the beauty and chaos of nature and the exploration of technology, music, and consciousness.

Max Cooper is a musician with a PhD in computational biology. He integrates electronic music with immersive video projections inspired by scientific exploration. His latest project, Seme, commissioned by the Salzburg Easter Festival, merges Italian musical heritage with contemporary techniques, and was also performed at the Barbican in London. He supplied music for a video narrated by Greta Thunberg and Pope Francis for COP26. In 2016, Cooper founded Mesh, a platform to explore the intersection of music, science and art. His Observatory art-house installation is on display at Kings Cross until May 1st.

https://maxcooper.net
https://osterfestspiele.at/en/programme/2024/electro-2024
https://meshmeshmesh.net
www.kingscross.co.uk/event/the-observatory
The music featured on this episode was Palestrina Sicut, Cardano Circles, Fibonacci Sequence, and Scarlatti K141. Music is from Seme and is courtesy of Max Cooper.
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 19, 202413 min

What can music teach us that science can’t? - MAX COOPER - Musician, Fmr. Computational Biologist

How is being an artist different from a machine that is programmed to perform a set of actions? How can we stop thinking about artworks as objects, and start thinking about them as triggers for experiences? In this conversation with Max Cooper, we discuss the beauty and chaos of nature and the exploration of technology, music, and consciousness.

Max Cooper is a musician with a PhD in computational biology. He integrates electronic music with immersive video projections inspired by scientific exploration. His latest project, Seme, commissioned by the Salzburg Easter Festival, merges Italian musical heritage with contemporary techniques, and was also performed at the Barbican in London. He supplied music for a video narrated by Greta Thunberg and Pope Francis for COP26. In 2016, Cooper founded Mesh, a platform to explore the intersection of music, science and art. His Observatory art-house installation is on display at Kings Cross until May 1st.

“As technology becomes more dominant, the arts become ever more important for us to stay in touch with the things that the sciences can't tackle. What is it actually like to be a person? What's actually important? We can have this endless progress inside this capitalist machine for greater wealth and longer life and more happiness, according to some metric. Or we can try and quantify society and push it forward. Ultimately, we all have to decide what's important to us as humans, and we need the arts to help with that. So, I think what's important really is just exposing ourselves to as many different ideas as we can, being open-minded, and trying to learn about all facets of life so that we can understand each other as well. And the arts is an essential part of that.”

https://maxcooper.net
https://osterfestspiele.at/en/programme/2024/electro-2024
https://meshmeshmesh.net
www.kingscross.co.uk/event/the-observatory
The music featured on this episode was Palestrina Sicut, Cardano Circles, Fibonacci Sequence, and Scarlatti K141. Music is from Seme and is courtesy of Max Cooper.
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 18, 202450 min

How does a changing climate affect our minds, brains & bodies? - Highlights - CLAYTON ALDERN

"So, I am a data reporter at Grist. And what does that mean? I'm building statistical models of phenomena. I'm writing web scrapers and building data visualizations, right? I have quite a technical job in terms of my relationship with the field of journalism. I just don't think that those tools ought to be put on some kind of pedestal and framed as the be-all and end-all of the possibility of the field, right? I think that data science, artificial intelligence, and these new LLMs are useful tools to add to the journalistic toolkit. We don't know what the ultimate effect of AI is going to be on journalism, but I think journalism is maybe going to look a little bit different in 20 years."

Clayton Page Aldern is an award-winning neuroscientist turned environmental journalist whose work has appeared in The Atlantic, The Guardian, The Economist, and Grist, where he is a senior data reporter. A Rhodes Scholar, he holds a Master's in Neuroscience and a Master's in Public Policy from the University of Oxford. He is also a research affiliate at the Center for Studies in Demography and Ecology at the University of Washington. He is the author of The Weight of Nature: How a Changing Climate Changes Our Minds, Brains, and Bodies, which explores the neurobiological impacts of rapid environmental change.

https://claytonaldern.com
www.penguinrandomhouse.com/books/717097/the-weight-of-nature-by-clayton-page-aldern
https://csde.washington.edu
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 16, 202413 min

How climate change is making us sick, angry & anxious - CLAYTON ALDERN - Neuroscientist turned Eco-Journalist

How does a changing climate affect our minds, brains and bodies?

Clayton Page Aldern is an award-winning neuroscientist turned environmental journalist whose work has appeared in The Atlantic, The Guardian, The Economist, and Grist, where he is a senior data reporter. A Rhodes Scholar, he holds a Master's in Neuroscience and a Master's in Public Policy from the University of Oxford. He is also a research affiliate at the Center for Studies in Demography and Ecology at the University of Washington. He is the author of The Weight of Nature: How a Changing Climate Changes Our Minds, Brains, and Bodies, which explores the neurobiological impacts of rapid environmental change.

"So, I am a data reporter at Grist. And what does that mean? I'm building statistical models of phenomena. I'm writing web scrapers and building data visualizations, right? I have quite a technical job in terms of my relationship with the field of journalism. I just don't think that those tools ought to be put on some kind of pedestal and framed as the be-all and end-all of the possibility of the field, right? I think that data science, artificial intelligence, and these new LLMs are useful tools to add to the journalistic toolkit. We don't know what the ultimate effect of AI is going to be on journalism, but I think journalism is maybe going to look a little bit different in 20 years."

https://claytonaldern.com
www.penguinrandomhouse.com/books/717097/the-weight-of-nature-by-clayton-page-aldern
https://csde.washington.edu
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Apr 16, 202453 min

Consciousness, AI & Creativity with DUSTIN O’HALLORAN - Emmy Award-winning Composer

What will happen when Artificial General Intelligence arrives? What is the nature of consciousness? How are music and creativity pathways for reconnecting us to our humanity and the natural world?

Dustin O’Halloran is a pianist and composer and member of the band A Winged Victory for the Sullen. Winner of a 2015 Emmy Award for his main title theme to Amazon's comedy drama Transparent, he was also nominated for an Oscar, a Golden Globe, and a BAFTA for his score for Lion, written in collaboration with Volker Bertelmann (aka Hauschka). He has composed for Wayne McGregor (The Royal Ballet, London), Sofia Coppola’s Marie Antoinette, Ammonite starring Kate Winslet, and The Essex Serpent starring Claire Danes. He produced Katy Perry’s “Into Me You See” from her album Witness and appears on Leonard Cohen’s 2019 posthumous album Thanks For The Dance. With six solo albums under his name, his latest album 1 0 0 1, which explores ideas of technology, humanity and mind-body dualism, is available on Deutsche Grammophon.

“The album 1 0 0 1 is really like a journey from our connection with nature to where we are now, in this moment where we're playing with technology. We're almost in this hybrid space, not fully understanding where it's going. And it's very deep in our subconscious and probably much greater than we realize. And it sort of ends in this space where the consciousness of what we're creating, it's going to be very separate from us. And I believe that's kind of where it's heading – the idea of losing humanity, losing touch with nature and becoming outside of something that we have created.”

https://dustinohalloran.com/
www.deutschegrammophon.com/en/artists/dustin-o-halloran
www.imdb.com/name/nm0641169/bio/?ref_=nm_ov_bio_sm
Music courtesy of Dustin O’Halloran and Deutsche Grammophon
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Mar 29, 2024 · 51 min

How can music help us expand our understanding of consciousness & AI? - Highlights - DUSTIN O’HALLORAN

“The album 1 0 0 1 is really like a journey from our connection with nature to where we are now, in this moment where we're playing with technology. We're almost in this hybrid space, not fully understanding where it's going. And it's very deep in our subconscious and probably much greater than we realize. And it sort of ends in this space where the consciousness of what we're creating is going to be very separate from us. And I believe that's kind of where it's heading – the idea of losing humanity, losing touch with nature and becoming outside of something that we have created.”

Dustin O’Halloran is a pianist and composer and a member of the band A Winged Victory for the Sullen. Winner of a 2015 Emmy Award for his main title theme to Amazon's comedy drama Transparent, he was also nominated for an Oscar, a Golden Globe, and a BAFTA for his score for Lion, written in collaboration with Volker Bertelmann (aka Hauschka). He has composed for Wayne McGregor (The Royal Ballet, London), Sofia Coppola’s Marie Antoinette, Ammonite starring Kate Winslet, and The Essex Serpent starring Claire Danes. He produced Katy Perry’s “Into Me You See” from her album Witness and appears on Leonard Cohen’s 2019 posthumous album Thanks For The Dance. With six solo albums to his name, his latest album, 1 0 0 1, which explores ideas of technology, humanity, and mind-body dualism, is available on Deutsche Grammophon.

https://dustinohalloran.com/
www.deutschegrammophon.com/en/artists/dustin-o-halloran
www.imdb.com/name/nm0641169/bio/?ref_=nm_ov_bio_sm

Music courtesy of Dustin O’Halloran and Deutsche Grammophon

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Mar 29, 2024 · 10 min

The Pursuit of Happiness - JEFFREY ROSEN - President & CEO of the National Constitution Center

"There's no more empowering act for me than simply turning the devices off. The simple rule that I'm not allowed to browse in the morning until I've done my reading has opened up worlds. So much of tech and the net are designed to capture our attention, to turn us into consumers rather than citizens, to fan our base passions and emotions, and to send us down rabbit holes. The best thing we can do is to turn it off. 'The pictures in our minds,' I guess that was Walter Lippmann, are confirmed by the Enlightenment empiricists like John Locke, who insists that our reality is shaped by our external sensations and what we put into our minds. And then, of course, we are what we think. Life shaped by the mind, as The Dhammapada states. And then, the great injunction that my dad used to quote from Paracelsus: 'As we imagine ourselves to be, so shall we be.'"

Jeffrey Rosen is President and CEO of the National Constitution Center, where he hosts We the People, a weekly podcast of constitutional debate. He is also a professor of law at the George Washington University Law School and a contributing editor at The Atlantic. Rosen is a graduate of Harvard College, Oxford University, and Yale Law School. He is the author of seven previous books, including the New York Times bestseller Conversations with RBG: Justice Ruth Bader Ginsburg on Life, Love, Liberty, and Law. His essays and commentaries have appeared in The New York Times Magazine; on NPR; in The New Republic, where he was the legal affairs editor; and in The New Yorker, where he has been a staff writer. His latest book is The Pursuit of Happiness: How Classical Writers on Virtue Inspired the Lives of the Founders and Defined America.

https://constitutioncenter.org/about/board-of-trustees/jeffrey-rosen
www.simonandschuster.com/books/The-Pursuit-of-Happiness/Jeffrey-Rosen/9781668002476
https://constitutioncenter.org/news-debate/podcasts

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Mar 26, 2024 · 12 min

Can AI help us understand animal language? - Author SY MONTGOMERY & Illustrator MATT PATTERSON

"I would love it if AI could decode some animal languages that humans have not been able to, like the whistles and clicks of whales and dolphins. Our human limitations have blinded us to so much of what animals are saying and telling us. More than anything, though, and I don't know if AI can do this, but we need something to talk our leaders into having some sense about preserving our world. Anything that AI can bring to ameliorate global climate change, to catch the poachers who are killing turtles and other wildlife, and anything AI can teach us about how not to consume the entire world like some horrible fire... let's leave some space for the animals."

Author Sy Montgomery and illustrator Matt Patterson are naturalists, adventurers, and creative collaborators. Montgomery has published over thirty acclaimed nonfiction books for adults and children and received numerous honors, including lifetime achievement awards from the Humane Society and the New England Booksellers Association. Patterson's illustrations have been featured in several books and magazines, such as Yankee Magazine and Fine Art Connoisseur. He is the recipient of the Roger Tory Peterson Wild American Art Award, the National Outdoor Book Award for Nature and the Environment, and other honors. Most recently, Patterson provided illustrations for Freshwater Fish of the Northeast. Their joint books are Of Time and Turtles: Mending the World, Shell by Shattered Shell and The Book of Turtles. Montgomery’s other books include The Soul of an Octopus, The Hawk’s Way, and The Secrets of the Octopus (published in conjunction with a National Geographic TV series).

www.mpattersonart.com
https://symontgomery.com
www.harpercollins.com/products/of-time-and-turtles-sy-montgomery?variant=41003864817698
www.harpercollins.com/products/the-book-of-turtles-sy-montgomery?variant=40695888609314
https://press.uchicago.edu/ucp/books/book/distributed/F/bo215806915.html

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Mar 21, 2024 · 14 min

Revolutionizing Sustainability: BERTRAND PICCARD's Path to a Cleaner Planet - Highlights

"So it's what you do with the invention that's important. And with AI, it's exactly the same. If you make deep fakes, I think you can just destroy trust and confidence in the world because you will never know what is true and what is false. But if you use AI to balance the grid, to incorporate renewable energies that are intermittent, the storage, the usage by consumers, then you begin to be much more efficient because you use energy at the right moment, in the right way, at the right place, for the right people, and you will save a lot of energy. So, in the end, it's always human behavior that decides if an invention is good or not. What I would really like to avoid is AI being used for useless things."

Bertrand Piccard is a notable Swiss environmentalist, explorer, author, and psychiatrist. His ventures include being the first to travel around the world in a non-stop balloon flight and, years later, in a solar-powered airplane. He is regarded as a pioneer in clean technology. Piccard is also the founder of the Solar Impulse Foundation, which has identified over 1500 actionable and profitable climate solutions and connects them with investors. As a UN Ambassador for the Environment, his goal is to convince leaders of the viability of a zero-carbon economy, which he will demonstrate via his next emission-free project, Climate Impulse, a green hydrogen-powered airplane that can fly nonstop around the earth.

http://www.solarimpulse.com
https://climateimpulse.org/
https://bertrandpiccard.com/

Photos:
Bertrand Piccard with Ilham Kadri, CEO of Syensqo (main technological partner of Climate Impulse)
Bertrand Piccard @ Solar Impulse, Jean Revillard

Mar 14, 2024 · 11 min

Beyond the Horizon: Pioneering Green Aviation with BERTRAND PICCARD - Aviator, Explorer, Environmentalist

What is the future of green aviation? How do we share environmental solutions to unite people and change the climate narrative from sacrifice and fear to enthusiasm and hope?

Bertrand Piccard is a notable Swiss environmentalist, explorer, author, and psychiatrist. His ventures include being the first to travel around the world in a non-stop balloon flight and, years later, in a solar-powered airplane. He is regarded as a pioneer in clean technology. Piccard is also the founder of the Solar Impulse Foundation, which has identified over 1500 actionable and profitable climate solutions and connects them with investors. As a UN Ambassador for the Environment, his goal is to convince leaders of the viability of a zero-carbon economy, which he will demonstrate via his next emission-free project, Climate Impulse, a green hydrogen-powered airplane that can fly nonstop around the earth.

"So it's what you do with the invention that's important. And with AI, it's exactly the same. If you make deep fakes, I think you can just destroy trust and confidence in the world because you will never know what is true and what is false. But if you use AI to balance the grid, to incorporate renewable energies that are intermittent, the storage, the usage by consumers, then you begin to be much more efficient because you use energy at the right moment, in the right way, at the right place, for the right people, and you will save a lot of energy. So, in the end, it's always human behavior that decides if an invention is good or not. What I would really like to avoid is AI being used for useless things."

http://www.solarimpulse.com
https://climateimpulse.org/
https://bertrandpiccard.com/

Photos:
COP Summit
Bertrand Piccard with Simon Stiell, Executive Secretary of the UNFCCC
Ville de Demain exhibition, Cité des sciences et de l'industrie, Paris

Mar 14, 2024 · 55 min

Who were the Neanderthals? - Highlights - DR. LUDOVIC SLIMAK

"AI is a fascinating question. You know, children are sponges. They look and say, this is something different. So your values are no longer good enough for the future. And this is what we are confronted with, with AI. And that's a fantastic tool, but at a certain moment, this technology will evolve and become super efficient and smarter than we are. And at this moment, our children could simply reject everything that makes us human. And our society at this moment, and maybe that of our humanity, could collapse on itself. I begin the book with the question of intelligence outside of Earth. That could be AI, that could be extraterrestrials. This is fascinating for us because this is another intelligence. Now, we have created AI, and we are fascinated by what we see because we can discuss with an AI, and it's very clear that the AI understands our concepts and responds with our own concepts."

Ludovic Slimak is a paleoanthropologist at the University of Toulouse in France and Director of the Grotte Mandrin research project. His work focuses on the last Neanderthal societies, and he is the author of several hundred scientific studies on these populations. His research has been featured in Nature, Science, The New York Times, and other publications. He is the author of The Naked Neanderthal: A New Understanding of the Human Creature.

http://ww5.pegasusbooks.com/books/the-naked-neanderthal-9781639366163-hardcover
https://lampea.cnrs.fr/spip.php?article3767
www.odilejacob.fr/catalogue/sciences-humaines/archeologie-paleontologie-prehistoire/dernier-neandertalien_9782415004927.php

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Mar 12, 2024 · 14 min

Will human efficiency destroy the planet and us? - DR. LUDOVIC SLIMAK - Author of The Naked Neanderthal

Who were the Neanderthals? And what can our discoveries about them teach us about intelligence, our extractivist relationship to the planet, and what it means to be human?

Ludovic Slimak is a paleoanthropologist at the University of Toulouse in France and Director of the Grotte Mandrin research project. His work focuses on the last Neanderthal societies, and he is the author of several hundred scientific studies on these populations. His research has been featured in Nature, Science, The New York Times, and other publications. He is the author of The Naked Neanderthal: A New Understanding of the Human Creature.

"AI is a fascinating question. You know, children are sponges. They look and say, this is something different. So your values are no longer good enough for the future. And this is what we are confronted with, with AI. And that's a fantastic tool, but at a certain moment, this technology will evolve and become super efficient and smarter than we are. And at this moment, our children could simply reject everything that makes us human. And our society at this moment, and maybe that of our humanity, could collapse on itself. I begin the book with the question of intelligence outside of Earth. That could be AI, that could be extraterrestrials. This is fascinating for us because this is another intelligence. Now, we have created AI, and we are fascinated by what we see because we can discuss with an AI, and it's very clear that the AI understands our concepts and responds with our own concepts."

http://ww5.pegasusbooks.com/books/the-naked-neanderthal-9781639366163-hardcover
https://lampea.cnrs.fr/spip.php?article3767
www.odilejacob.fr/catalogue/sciences-humaines/archeologie-paleontologie-prehistoire/dernier-neandertalien_9782415004927.php

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Mar 12, 2024 · 56 min

What does the future hold for our late-stage capitalist society with mega-corps controlling everything? - Highlights - KYLE HIGGINS, KARINA MANASHIL & KID CUDI

“I write science fiction, so it's fascinating from a technological standpoint, but we have dozens and dozens of years of science fiction warning us about technology unchecked. The irony is that now so many of those science fiction stories have probably been used to feed the AI training algorithms that they are now repurposing and ripping off. So it's very ironic in that regard to me. I've heard artists refer to AI as a plagiarism machine, and I do think that's a very apt descriptor. I have a lot of friends who are affected by this. And these tech companies think: if we can make it easier and cheaper to capture some aspect of the human spirit, then, by God, isn't that best for shareholders?” –Kyle Higgins

Kyle Higgins is an Eisner Award-nominated, #1 New York Times bestselling comic book author and award-winning filmmaker known for his work on DC Comics’ Batman titles as well as his critically acclaimed reinventions of Mighty Morphin Power Rangers for Boom! Studios/Hasbro, Ultraman for Marvel Comics, and his creator-owned series Radiant Black, NO/ONE, and Deep Cuts for Image Comics. Kyle is the founder and creative director of Black Market Narrative and The Massive-Verse.

Karina Manashil is the President of MAD SOLAR. After graduating from Chapman University with a BFA in Film Production, she began her career in the mailroom at WME, where she became a talent agent. In 2020, she partnered with Scott Mescudi and Dennis Cummings to found MAD SOLAR. Its first release was the documentary “A Man Named Scott” (Amazon), and she then went on to executive produce Ti West’s trilogy “X,” “Pearl,” and “MaXXXine” (A24). Manashil received an Emmy nomination as an executive producer on the Netflix animated event “Entergalactic.” She also produced the Mescudi/Kyle Higgins comic book “Moon Man,” which launched through Image Comics. She is next producing the upcoming Mescudi/Sam Levinson/The Lucas Bros film “HELL NAW” (Sony) and the animated feature “Slime” from auteur animator Jeron Braxton.

moonmancomics.com
https://imagecomics.com
https://www.imdb.com/name/nm3556462/?ref_=fn_al_nm_1

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Audio of Scott Mescudi courtesy of Mad Solar and the Moon Man comic book release at Revenge Of (Eagle Rock, CA, Jan 31, 2024)

Feb 27, 2024 · 14 min