FIR #478: When Silence Isn’t Golden
Episode 478

For Immediate Release · Neville Hobson and Shel Holtz

August 25, 2025 · 1h 30m

Show Notes

For a while, businesses were flexing their social responsibility muscles, weighing in on public policy matters that affected them or their stakeholders. These days, not so much, with leaders fearing reprisal for speaking out. But silence can have its own consequences. Also in this episode: The gap between AI expectations and reality; rent-a-mob services damage the fragile reputation of the public relations profession; too many people think AI is conscious, so we have to devise ways to reinforce among users that it’s not; Denmark is dealing with deepfakes by assigning citizens the copyright to their own likenesses; crediting photographers for the work you copied from the web won’t protect you from lawsuits for unauthorized use. In Dan York’s Tech Report, Dan shares updates on Mastodon (at last) introducing quote posts, and Bluesky’s response to a U.S. Supreme Court ruling upholding Mississippi’s law making full access to Bluesky (and other services) contingent upon an age check.

Links from this episode:

Links from Dan York’s Tech Report:

The next monthly, long-form episode of FIR will drop on Monday, September 29.

We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

Special thanks to Jay Moonah for the opening and closing music.

You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript:

@nevillehobson (00:02)
Hello everyone and welcome to For Immediate Release. This is episode 478, the monthly long-form edition for August 2025. I’m Neville Hobson.

Shel Holtz (00:14)
And I’m Shel Holtz, and we have six reports for you today. Hope you find them illuminating. And if you find any of them worthy of comment, I would hope that you would comment on them. There are a number of ways to comment on the content that you hear on For Immediate Release. You can send us an email to fircomments at gmail.com and attach an audio file if you like. You can record that audio file.

On the FIR website, there’s a tab in the right-hand corner that says Record Voicemail, and you can record up to 90 seconds. You can record more than one; we know how to edit those things together. So send us your audio comments, but you can also leave comments on the show notes at FIRpodcastnetwork.com, on the posts we make at LinkedIn and Facebook and Threads and Bluesky and Mastodon. You can comment in the FIR community on Facebook. There are lots of ways that you can share your opinion with us so that we can bake those into the show. And we also appreciate your ratings and reviews. So with those comment mechanisms out of the way, Neville, let’s hear about the episodes that we have recorded since our last monthly episode.

@nevillehobson (01:33)
We did five since then. Actually, it was four plus the last monthly. So we’ll start with that one. It’s episode 474 for July, the long-form episode. That one ran one hour, 33 minutes, so a bit shorter than we usually do for the month. Hefty, hefty but good, as Donna would say. Yeah, exactly.

Shel Holtz (01:52)
We were terse.

@nevillehobson (01:55)
So we covered a number of topics related to AI, which was how we titled the episode show notes: AI is redefining public relations, driving a change in the way we craft press releases, PR is at the heart of AI optimization, and more. Good discussion. We had lots of topics. The links are brilliant; lots of content we linked to in that episode.

Then we followed that. That was on the 28th of July that was published. On the 29th, the day after that, we published an FIR interview with Monsignor Paul Tighe of the Vatican. That was on AI ethics and the role of humanity. It’s actually an intriguing topic. We dove into a document called Antiqua et Nova that was really the anchor point for the conversation, which talked about

the comparison of human intelligence with artificial intelligence, and that drove that discussion. He was a great guest on the show, Shel, and it’s intriguing. There’s more coming about that in the coming weeks, by the way, because I’ve been posting follow-ups to that in little video clips from that interview, and there’s more of that kind of thing coming soon. So we have a comment, right?

Shel Holtz (03:06)
Indeed we do.

We do, from Mary Hills out of Chicago. She’s an IABC fellow who says: insightful and stimulating discussion. Thank you to the extraordinary host team for making this happen and to Monsignor Tighe for sharing his insights. To the question, my view as a ComPro is to build bridges to discover options to move forward and choose the best way. Think discursive techniques, sociopositive climates, and our ability to synthesize data and information.

It taps into those intangible assets we bring to our work and are inherently in us.

@nevillehobson (03:45)
Good comment. Reminds me, by the way, related to what you were talking about before we started this, how to comment: most of the comments we seem to get, certainly in the last six months, if not more, have been on LinkedIn. It’s a great place for discussion, but that’s a business network. You need to be a member to see them. So if you’re not a member and you want to comment, join LinkedIn, otherwise you won’t be able to.

Shel Holtz (03:56)
Mm-hmm.

It’s free.

@nevillehobson (04:07)
Yeah, it is. So then next, well, you’ve got a paid option, but generally it’s free unless you take out the paid option. I’ve got the paid option too, just as a little aside there. So we followed that on the 4th of August, episode 475, title of the post: Algorithms Got You Down? Get Retro with RSS. The rise of social media news feeds had rendered RSS redundant for many people, we said, and declining usage led Google to sunset Google Reader.

Shel Holtz (04:09)
Not for me, I pay for mine, but.

Yeah, that’s right. Exactly.

@nevillehobson (04:34)
But RSS feeds never went away. And we explored that a bit. Most people don’t know that for all the newsletters they subscribe to, the Substacks or whatever publication it is, RSS is driving a lot of how they get the content that they include in those publications. So it’s part of the plumbing. And it always has been. Even now, people don’t think about this. But we had an interesting perspective on that, on how to use RSS afresh in a slightly different way.
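As an aside on that plumbing point: reading an RSS feed takes nothing more than the standard library in most languages. Here is a minimal Python sketch; the `episode_titles` function name and the sample XML are illustrative, not the real FIR feed.

```python
# Pull the item titles out of an RSS 2.0 document using only the
# Python standard library -- the "plumbing" that newsletter tools
# and podcast apps rely on under the hood.
import xml.etree.ElementTree as ET

def episode_titles(rss_xml: str) -> list[str]:
    """Return the <title> text of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them
    # at any depth.
    return [item.findtext("title") for item in root.iter("item")]

# Illustrative sample feed, not fetched from the network.
sample = """<rss version="2.0"><channel>
  <title>For Immediate Release</title>
  <item><title>FIR #478: When Silence Isn't Golden</title></item>
  <item><title>FIR #477: De-Sloppifying Wikipedia</title></item>
</channel></rss>"""
```

In practice you would download the feed URL first (for example with `urllib.request`) and pass the response body to the same parser.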

Episode 476 on the 12th of August: Rewiring the Consulting Business for AI. We reviewed the actions of several firms and agencies and discussed what might come next for consultants. There’s been a change, almost literally a change of business models, with the rise of AI, agentic AI in particular. So we explored that; a good conversation. And finally, 477 on the 18th of August: De-Sloppifying Wikipedia. That’s a heck of a descriptor you put in the headline, that de-sloppifying. Wikipedia introduced a speedy deletion policy for AI slop articles. It’s actually a bigger deal than most of us would realize, if we ever thought about it. Wikipedia, the user-generated-content encyclopedia, is addressing, or rather has been trying to address for a while,

Shel Holtz (05:30)
I’m glad you liked it.

@nevillehobson (05:51)
the rise of AI-generated content, which makes things very difficult in a collaborative editing environment with volunteer editors, one that is all about consensual agreement to any change or addition. That takes a while. This is at light speed by comparison to that procedure. So they’ve come up with a speedy deletion policy, and that’s getting some discussion too. But Wikipedia is an important place online. It has been for a long time, a kind of natural first place that shows up when you’re looking for information about a company, an individual, whatever it might be, a subject of some type. So trust is key to what you see there. We had quite a bit of a conversation on that. And that wraps up what we’ve been doing since the last episode.

Shel Holtz (06:42)
We did have a comment on 477, this from Mark Hillary, who says, got to say I’m not familiar with Trust Cafe.

@nevillehobson (06:44)
Oh, we did. You’re right. We did. Yes. Yep.

Okay, good comment.

No, me neither. That was Mark Hillary. I’m surprised I didn’t leave a comment in reply to him because I know him, but obviously I didn’t see the comments at the time.

Shel Holtz (07:02)
Now I have to go look that up.

Well, it’s waiting. It won’t go anywhere. We also, in the last week, recorded the most recent episode of Circle of Fellows, the monthly panel discussion with four fellows of the International Association of Business Communicators. This was episode 119 of this monthly panel discussion, and it was on sustainability, communicating sustainability.

@nevillehobson (07:15)
Yeah. Okay.

Shel Holtz (07:37)
The panel included Zora Artis from Australia, Bonnie Caver from Texas, Brent Carey from Toronto, and Martha Muzychka from the far east of Canada. The next Circle of Fellows is scheduled for September 18th at 10 a.m. I tell you all of this because you can watch it in real time and participate in the conversation. This one is going to be about hybrid communications and hybrid workplaces.

This will be moderated by Brad Whitworth, and three of the four panelists have been identified so far: Priya Bates, Andrea Greenhouse, and Ritzi Ronquillo. So, so far, Brad, the moderator, is the only American on that panel: Priya from Toronto, Andrea from Toronto, and Ritzi from the Philippines. So it’ll be a good international discussion on hybrid.

That will lead us into our reports for this month, right after this.

But one of the biggest workplace stories right now is the widening gap between the promise of AI and the reality employees are living day to day. The headlines have been flooding the zone lately. MIT researchers report that 95% of generative AI pilots in companies are failing. The New York Times recently noted that businesses have poured billions into AI without seeing the payoff.

And Gartner’s latest hype cycle has generative AI sliding into the famous trough of disillusionment. By the way, that MIT report is worth a healthy dose of skepticism; they interviewed something like 50 people to draw those conclusions. But the trend is pretty clear: the number of pilots that are succeeding in companies is definitely on the low end. But while companies wrestle with ROI, employees are wrestling with something more personal: uncertainty.

Pew research found that more than half of US workers worry about AI’s impact on their jobs, while most haven’t actually used AI at work much yet. NBC reported that despite the hype, there’s little evidence of widespread job loss so far. Still, the fears are real, and they’re being compounded by mixed signals inside organizations. Here’s one example I read about. A sales team was told to make AI part of every proposal,

but they weren’t offered any guidance, any training, any process change. As a result, some team members just kind of quietly opened ChatGPT and used it to generate some bullet points. Others copied old proposals and slapped on an AI-enhanced label. A few admitted they just pretended to use AI to avoid looking like they were behind the curve, which, by the way, lines up with a finding from HR Dive that one in six workers say they pretend to use AI because of workplace pressure.

That’s not innovation, that’s performance theater. This is where communicators need to step in. Employees don’t need more hype, they need transparency. They need to hear that most pilots fail before they succeed. They need clarity about how AI will really fit into their workflows and they need reassurance that the company has a plan for reskilling, not just replacing its people.

So for managers, and I am a firm believer that we need to work with managers to help them communicate with their employees, here’s a simple talk track you can put in their hands right away. So share this with managers on your teams. First: AI is a tool we’re still figuring out; your input on what works and what doesn’t is critical. Second: we’re not expecting you to be experts overnight; training and support will come before requirements. And third:

Your job isn’t disappearing tomorrow. Let’s focus on how these tools can take that busy work off your plate. And for communicators thinking about the next 30 days, consider a quick communication action plan. On week one, launch a listening tour. Ask employees how they feel about AI and where they see potential. Week two, share those findings in plain language, including what employees are worried about. Week three,

Host AI office hours with your IT team or HR partners to answer real questions. And on week four, publish a simple playbook: what’s okay, what’s not, and how employees will be supported as the tech evolves. That should help you cut through the hype while keeping employees engaged. The technology may still be finding its footing, but if communicators help employees feel informed, supported, and included, the organization will be in a far better position to capture real value when AI does start delivering on its promises at the enterprise level.

@nevillehobson (12:22)
Interesting statistics there, Shel. Listening to that advice you gave, it just made me think straight away that, and indeed looking at the HR Dive report in particular with what they’re talking about, 75% of workers said they’re expected to use AI at work, whether officially or unofficially. That’s a bit alarming, I think.

Some people said they feel pressured and uncomfortable, and some said they pretend to use it rather than push back. So that’s part of the landscape. And that seems to me to be what needs addressing first and foremost, because if that is the situation in some organizations, then communications has got a real uphill struggle to persuade employees to do all the things that you mentioned.

So, you know, the comms team could do all those things. Week one, we do this. Week two. But unless you get the engagement from employees that makes it worthwhile, it’s not worth doing, if the culture in the organization means you’re not really seeing the right support from leaders. So that is probably the fundamental that needs addressing. It’s a sad fact, isn’t it, if that is the climate still that leads to this kind of reporting.

I don’t hear similar in the UK, but then again, there’s not so much research going on as there is in the US, plus the numbers are smaller here. This is very US-centric. This one in HR Dive is a thousand people they talked to. Nearly 60% said they use AI daily. I’m surprised; I’d have thought it might be higher than that. So that’s all part of the picture there. That makes it a real struggle to implement what you’ve suggested.

What do you think? Is it a real hurdle?

Shel Holtz (14:08)
I think it is a real hurdle. And I think one of the things that we need to acknowledge is that it’s leaders in organizations who are driving the adoption of AI. Let’s be clear: it’s not IT behind the AI push. It’s leaders who see the potential for doing more with less and earning more and everything else that AI has promised, and they’re jamming it down the organization’s throat.

I have mentioned before on the show that I recently read a book called How Big Things Get Done. It’s mainly about building. It’s written by Bent Flyvbjerg, a Danish professor who has the world’s largest database of megaprojects. But the conclusion that he draws is that the projects that succeed are the ones where they put all of the time into the planning up front. If you jump right into the building, you get

disasters like the California high-speed rail and the Sydney Opera House, which I didn’t realize was a disaster until I read about it. But my God, what a disaster. And the ones that succeed are the ones that spend the time on the planning. The Empire State Building went up fast; I don’t remember if it was two years, but they put a lot of time into what we call pre-construction. And I think that’s not happening with AI

in the enterprise right now. I think there are leaders who are saying we have to be AI-first, we have to lean into AI, we need to start generating revenue and cutting headcount, so let’s just make it happen. And there’s no planning. There’s no setting the environment for employees. There’s very little training. Although I do see that there is a shift in

the dollars that are being invested in AI, moving to the employee side and away from the tool side, which is heartening. But employees are concerned about this because they’re not getting the training. They’re not getting the guidance. They’re not seeing the plan. All they’re hearing is, we’ve got to start using this. And I think that would leave people concerned. I think that explains a lot of the angst that we’re hearing about among employees.

@nevillehobson (16:19)
Yeah, that makes sense. I mean, again, just glancing through the statistics in the HR Dive report, it’s interesting, the contrast that I’m reading. It says 84% of workers said they feel more productive using AI. 71% said they know how to use it efficiently. They report less burnout, less work stress, better job satisfaction. Nearly a third said they feel less lonely.

Shel Holtz (16:33)
that would be me, by the way.

@nevillehobson (16:43)
Those are the ones who’ve developed a relationship with ChatGPT, I know. And a quarter said they collaborate more. The 4o one in particular; I was right there, I tell you. But then in contrast, some workers said they’re struggling to keep up: one in four feeling often or always overwhelmed by AI developments, and a third said that learning, using, and checking AI takes as much time as their previous approach to work. And 25% of those expected to use AI at work said they have received no training.

Shel Holtz (16:47)
Yeah, the 4o in particular.

@nevillehobson (17:11)
Another 25% said they did receive training, and a third were given dedicated time at work to learn AI skills. So it’s not all bad. That’s a fact. But it goes on, quoting some people at Deloitte about AI development: a disconnect has emerged, where some people are pretending to understand the tech and others have declined to prioritize it. So you’ve got a real mixed bag of landscapes, if you like, that need, well.

To me, it seems that you need to identify this and figure out how you’re going to address it. Because the contrast, it seems to me: you’ve got a high percentage of them saying they’re more productive, others struggling to keep up, others not getting any training at all. You mentioned those construction examples, like the Empire State Building going up real fast.

The reality with AI is that, to coin a corny phrase again, I suppose, things are developing at light speed. Things are happening so fast that it is hard to keep up. So the pressure is there, particularly in the kind of more relaxed environments today, more informality, less formality, where the control has vanished from the top down,

and anyone can get access to information about literally anything; just go online. And so people are finding out about these things. They’re exposed to: this is the latest AI, look at this one. And they hear from their peers and so forth. And unless you’ve got a credible resource that is appealing to people, they’re going to do their own thing, particularly if they don’t feel they’re getting any kind of support on how to implement all this stuff. So this is quite

a challenge for communicators. But I think it’s a bigger challenge organizationally, in leadership, where you’ve got this challenge that doesn’t seem to be being addressed by many companies. And I would stress that this is not widespread. I don’t see anything in here that tells me this is the majority overall in organizations in the US, in spite of some of these percentages that suggest otherwise. But it is definitely a situation that is not good for the organization. And surely

that must be apparent to everyone, I suppose.

Shel Holtz (19:32)
You know, I would hope, but I would also hope that communicators step up and start documenting what’s going on in their organizations and feeding that back up, representing the employee voice to the leadership of the organization, so that maybe they’ll start taking a step back and thinking about how we do this strategically, because it hasn’t been strategic to this point. As employees read about these claims of 95% pilot failure,

those who are not really enthusiastic about AI will be able to use that as an excuse for not embracing it. Well, it doesn’t work anyway, and it’s not really making a difference, and companies aren’t achieving any ROI. So why should I spend time on this? It’s probably going to be gone in six months, right? And I was listening to an interview with Demis Hassabis, the CEO of Google DeepMind.

And this is on the Lex Fridman podcast, a long two-and-a-half-hour interview, but great. One of the things that he talked about, as Lex Fridman brought it up: he said, I have a friend who studies cuneiform, ancient images carved on stone, right? And he didn’t know a thing about AI. He’d barely heard about it. And…

@nevillehobson (20:41)
Chip show.

Shel Holtz (20:48)
It was a sabbath who made the point. said, you know, there are a lot of us who are talking about this and enthusiastic and you know, if you spend time on X, for example, everything is AI all the time and we lose sight of the fact that there is a huge part of the population that is blissfully unaware of all of this still. So there’s that to deal with too.

@nevillehobson (21:10)
A challenge, without doubt.

Okay, so speaking of AI, one of the big AI stories this month comes from Mustafa Suleyman, the CEO of Microsoft AI. He’s written a long essay with a striking title: We Must Build AI for People, Not to Be a Person. In it, he raises a concern about what he calls seemingly conscious AI. These are systems that won’t actually be conscious, but will be so convincing

Shel Holtz (21:16)
Are we?

@nevillehobson (21:40)
that people will start to treat them as if they are. He argues that this isn’t a distant science fiction scenario. With today’s models, long-term memory, and the ability to generate distinct personalities, it could arrive in just a few years. Already some people project feelings onto their chatbots, seeing them as partners, friends, or even divine beings. We’ve been hearing a lot about that recently. I’ll hold my hand up: I had a great relationship with my good friend and assistant, ChatGPT 4o.

I was not happy with the move to ChatGPT-5, which ditched all of that, and I felt like I was talking to someone I didn’t know at all, or who didn’t know me. So I get that. But Suleyman in his essay warns that this trend could escalate into campaigns for AI rights or AI citizenship, which would be a dangerous distraction, he says. Consciousness, he points out, is at the core of human dignity and legal personhood; confusing this by attributing it to machines

risks creating new forms of polarization and deep social disruption. But what stood out most for me wasn’t the alarm over AI psychosis that some commentators have picked up on. It was Suleyman’s North Star. He says his goal is to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. He describes Microsoft’s generative AI chatbot, Copilot, as a case study:

millions of positive, even life-changing interactions every day, carefully designed to avoid overstepping into false claims of consciousness or emotion. He argues that companies need to build guardrails into their systems so that users are gently reminded of AI’s boundaries, that it doesn’t actually feel, suffer, or have desires. This is all about making AI supportive, useful, and empowering without crossing into the illusion of personhood.

Now this resonates strongly in my mind with our recent FIR interview with Monsignor Paul Tighe from the Vatican. He too emphasized that AI must be in service of humanity, not replacing or competing with it, but reinforcing dignity, ethics, and responsibility. And it echoes strongly something I wrote following the publication of the FIR interview about the wisdom of the heart, the core idea that we should keep empathy, values, and human connection at

the center of AI adoption. It’s a central concept in Antiqua et Nova, the Vatican’s paper published earlier this year comparing artificial intelligence and human intelligence. So while the headline debate might be about whether AI can seem conscious, the bigger conversation, and the one I think we really should have, is how we ensure that AI is built in ways that help us be more human, not less. What strikes me is how Suleyman, Paul Tighe, and even our own conversations

all point in the same direction. AI should serve people, not imitate them. But in practical terms, how do we embed that principle in the way businesses and communicators talk about AI? Thoughts?

Shel Holtz (24:43)
It’s an interesting conundrum, largely because we are told by experts like Ethan Mollick, the professor out of the Wharton School in Pennsylvania, who is one of the leading posters on LinkedIn about AI and AI research, that the best way to get great results from AI is to treat it like a human, to engage in conversation with it. And

I find that to be true. I find that giving it a prompt, getting a response, and letting it go with that is not nearly as good as a conversation, a back and forth, asking for refinements and additions and posing questions and the like. And the more we have conversations with it and treat it like a human, the easier it’s going to be to slide down that slope into perceiving it

to be a person. I think that’s why we’re hearing about a lot of people who do believe that it’s conscious already. I mean, not among the AI engineering community, but you hear tales of people who are convinced that there is a consciousness there, and there is absolutely not. But it mimics humanity pretty well, and it’s going to get much, much better at it.

As Mollick said, at any point, the tool that you’re using today is the worst one you’ll ever use, because they’re just going to continue to get better. So getting people to not see them as conscious, I think, is going to be a challenge. And it’s not one that I think a lot of people are thinking about much, looking at the

productivity gains and other dimensions of this. Certainly people are looking at the harm; I mean, there’s a lot of conversation out there among the doomers, as they’re called, about what kind of safety measures are being considered as these models are evolving. But specifically this issue of treating it like a human, thinking of it

as a person with a consciousness, I don’t think there’s a lot of attention being paid to that and what the steps are going to be to mitigate it.

@nevillehobson (26:52)
Yeah, interesting. I have great respect for Ethan Mollick, I must admit. I read a lot of what he says, but I utterly disagree with this whole point about how you must treat it as if it is a person. That’s completely and utterly counter to the whole notion of the wisdom of the heart, which I think is a magnificent way to look at this,

where in all your thinking the dignity of the human being is at the center of what we do with AI. So we do not pretend it’s like a human at all. It is a tool that we can build a relationship with, but we don’t consider it to be like a person at all. But it’s not about how it develops. The point is, how do we develop

Shel Holtz (27:33)
Sure, there’s a difference between considering it…

@nevillehobson (27:41)
in how we use this, not how it’s developing, because we are the ones who are enabling it to develop through all the tools and activities we use. And the missing piece in all of that is: what about the people? What about the humanity here? Everyone who talks about this, and Ethan Mollick seems to be one of those too, it’s about the benefits we get from using an AI. It’ll make more money. It gives us better market share.

We enable people to do these things better, et cetera, et cetera. And yet, reflecting on your report just prior to this, there are many people in organizations who feel ignored, who feel overwhelmed, who are unhappy with this. There’s not enough explanation of what the benefits are, and those tend to be couched in: these are the benefits for the organization and the employees who work there and the customers who buy our products and so forth. So I think

we have to develop a way of thinking that gives a different focus to this than the one we are being pressured to accept, I suppose you could argue. There are strong voices arguing this; I get that. And like you said, I truly find it extraordinary that there are people who say, yeah, they’re sentient, these are like humans. Not at all. They’re algorithms, a bit of software. That’s it. So…

This is not about a Luddite approach to technology at all. It’s not about thinking it’s like the Terminator and Skynet and all that kind of stuff. No, not at all. It’s the moral and philosophical lens that is missing from all of this. And so what we need to bring into our conversations about this is that element of it that is missing largely everywhere you look.

Shel Holtz (29:27)
It is. I still think that most of the time I’m engaging with a model, I’m having a conversation with it. I mean, if I’m looking for a simple fact, I’ll go to Perplexity and get my answer. But if I’m developing a strategy, for example, which is something that I use AI to help me with, I’ll tell you, I have created a custom GPT that is a senior communication consultant. It took me about four hours

to build this out with all of the instruction set. I don’t have the budget to work with a consulting organization and there’s nobody who is higher in the hierarchy than me in communications where I work. So if I wanna bounce my ideas off a senior communications professional, I had to create one. So I did.

And I didn’t give it a name. I know Steve Crescenzo has one; he named his Ernie, after Ernest Hemingway. But I didn’t name mine. I’ll go have conversations with it about the strategy that I am considering, and it works really well, and it works best when I treat it like a consultant, when I have that conversation. That’s what I coded it to be. Well, I didn’t code it; I gave it the instructions. And I think it’s this behavior, on top of the fact that you have Character.AI and you have…

@nevillehobson (30:33)
Right.

Shel Holtz (30:40)
Facebook and Meta introducing characters that you can engage with that are designed to be people. And you have the therapists now that are coming, AI therapists, and they’re all designed to behave and engage with you like people. And I don’t have a problem with that. This is a tool, and this is one of the things that it does well. But how do we keep front of mind among people

that while you’re doing this, you need to remember that it is not a person and it is not conscious? I just want to say that on our intranet, when I sign onto our network in the morning, I have to click OK on a legal disclaimer, every single time I turn my company laptop on. Shouldn’t we have something like that, perhaps a disclaimer before you start interacting with these: this is a very lifelike, human-like experience that you’re about to have; keep in mind, it’s not.

@nevillehobson (31:10)
Well, that’s the whole point.

No, absolutely. I do the same, Shel. I’ve talked about this a lot over the last couple of years, on this show and elsewhere. I treat my ChatGPT assistant like a person, and I call it by a name: Jane is what I call the ChatGPT one. But I don’t see it as a real person at all. Far from it. I’m astonished, frankly, that some people would think this is a person I’m talking to. Come on, for Christ’s sake, it’s an algorithm.

And yet it enables me to have a better interaction with that AI assistant if I can talk to it the way I do, which is like I’m talking to you now, almost the same. But the bit that’s missing, and I think this is the heart of what Paul Tai was talking about, quoting from Antiqua et Nova, and I think this is the core part of the reflection on all this: we must not lose sight of the wisdom of the heart,

which reminds us that each person is a unique being of infinite value and that the future must be shaped with and for people. And that has got to underpin everything that we do. And as I noted in the somewhat rambling post I wrote, which was actually better than my first draft, I must admit, it’s not a poetic flourish, it’s a framing. That’s the thing that we’re missing. We mustn’t see

AI as a neutral tool. It’s not, really, because we shape it, and we need to encourage critical reflection on that human dignity. Wisdom can’t be reduced to data. The Vatican says that ethical judgment comes from human experience, not machine reasoning. I totally agree with that. So, I mean, this is to me the start of this conversation, really. And I think the kind of wisdom

or the thinking, certainly not wisdom, it seems to me, the thinking that is the counter to that, such as what you outlined, is very powerful and is embedded almost everywhere you look. So I looked at this myself and thought, OK, fine, I’m not going to evangelize this to anyone at all. I know what I’m going to do as far as I’m concerned. And that made me feel very comfortable that I’m going to follow the principles of this myself, which I have been doing for a while now, and which is, in a sense, reflective in the world of algorithms and automation: what does it mean to remain human?

So I’ve changed how I use the AIs, I must have, and maybe GPT-5 happened at the time I started making that change. That is something I’ve started talking to people about: did you think about this? How do you feel about that? And seeing what others think. And I’ve yet to encounter anyone who would say, this is amazing, what that’s saying makes total sense to me, let’s do this. No one I’ve talked to is saying that. So.

It’s something that I think the interviews, the one we did, others that Paul Tai is doing, and what I’m seeing increasingly as other people start to talk about this, all point to: the framing of it within this context. That’s where I think we need to go. We need to bring this into organizations. So, an invitation to reflect, let’s say: yes, this is great, what’s going on, and you’re doing this, but you need to also pause and think about it from this perspective as well. That’s what I think.

Shel Holtz (34:34)
Mm-hmm.

I would not disagree. And a lot of the development that’s happening in AI is focused on benefiting humanity. I’m looking at the scientific and medical research that it’s able to do. I mean, just AlphaFold, which won Demis Hassabis the Nobel Prize, is to benefit people. Where it’s probably benefiting people less is in business.

@nevillehobson (34:59)
Thank

Shel Holtz (35:06)
Because as you say, it needs to benefit people, but I think most business leaders think it needs to benefit profitability. And that could be at the expense of people.

@nevillehobson (35:16)
Well, it’s actually not about

benefiting people in that sense, because, yes, it is. It’s about reintroducing, in a sense, conscience, care, and context into thinking about what AI can do, which is related to efficiency, scale, and all those business benefits. That’s not people-oriented at all, no matter how they dress it up, saying, well, you employees are going to be more effective. No, it means that our share price will go up, for a publicly listed company, we’ll get paid more money, and all that kind of stuff. That’s what drives all of that, it seems to me.

Shel Holtz (35:35)
Mm-hmm.

@nevillehobson (35:45)
And I’m not saying it’s wrong, not by any means. In a capitalistic economy, which we’re all in, it isn’t wrong. But it’s missing this part of the jigsaw puzzle. And it’s hard to quantify. And I know one person who had a conversation with someone who said, give me the ROI on this. I thought, whoa, you’re right there with the wrong way to think about this. But we have to. And I think this is really, I would say, an invitation to reflect on how you’re thinking about this, not necessarily to…

change it, but reflect on it. Bring into this: what does it mean to remain human in this world of algorithms and automation, where things move so fast and the ROI acronym is right there in the middle of

Shel Holtz (36:26)
Yeah, it reminds me of the late, great Shel Israel asking, what’s the ROI of my pants? Remember that?

Do we need ROI on everything?

@nevillehobson (36:35)
He would have loved the wisdom of the heart,

I tell you, he would.

Shel Holtz (36:39)
Yeah,

yeah, he was very skeptical of the need for ROI for everything. Hence, what’s the ROI of my pants? Of course, somebody came up with the ROI of pants. I remember that too. Insofar as determining what would happen if he went to work without wearing any versus the cost of pants for a year. Yeah. All right, well, let’s move away from

@nevillehobson (36:50)
It’s super.

There’s some are away there, that’s a fact, yeah. Cool. Yep.

Shel Holtz (37:03)
AI and talk about more traditional public relations matters. The term rent-a-mob gets thrown around a lot in political discourse, usually as a way to delegitimize real opposition. But behind the rhetoric, there’s a very real, very troubling practice of paying people to pose as protesters to create the illusion of grassroots support.

And that practice is alive and well, and some firms, including companies that present themselves as PR or advocacy agencies, provide it openly. Crowds on Demand, for example, has made no secret that it will recruit and script protesters, calling the service “advocacy campaigns” or “event management.” I thought event management was like hiring the band and making sure the valet people showed up on time.

If all this sounds like a modern twist on an old tactic, it is, for sure. From free whiskey in George Washington’s day to front groups created by Big Tobacco in the 90s, engineered public opinion has a long history. What’s new is the professionalization of the practice. Today, you can literally hire a firm to stage a rally, a counter-protest, or a city council hearing appearance. It’s a service for sale, and the bill goes to the client. Legally,

this all sits in a very gray zone. U.S. law requires disclosure for campaign advertising and for paid lobbying, but there’s no equivalent requirement for paid protesters. If you buy a TV ad, you have to disclose who paid for it. If you hire lobbyists, they have to disclose who they’re working for. But if you pay 200 people to show up at City Hall and protest, there’s no federal law that requires anyone to disclose that fact. That’s the protest loophole. Ethically, though,

there is no gray area whatsoever. PRSA’s Code of Ethics is clear: honesty, disclosure, and transparency are non-negotiable. The code explicitly calls out deceptive practices like undisclosed sponsorships and front groups. IABC’s code says much the same: accuracy, honesty, respect for audiences. Paying people to pretend to care about a cause or policy fails those tests.

The fact that it’s not illegal doesn’t make it acceptable. It just makes it a bigger risk for the profession because when the practice is exposed, as it inevitably is, the credibility of public relations is what takes the hit. And it does get exposed. In one case, retirees were recruited to hold signs at a protest they didn’t understand. In another, college students were promised easy money to show up and chant at a rally.

These are not grassroots activists. They’re actors in somebody else’s play. And when the story surfaces in the press, it’s not just the client who looks bad. It’s the agency and then by extension, the rest of the industry. So let’s be clear. Rent-a-mob tactics are not clever. They’re not innovative and they’re not public relations. They are deception. They turn authentic public expression into a commodity and they undermine democracy itself.

If our job is to build trust between organizations and their publics, this is the opposite of that. Here’s the call to action: PR professionals must refuse this work. Agencies should set policies that forbid it and train staff on how to respond if they’re asked. Use the PRSA Code of Ethics as your shield and point to IABC standards as backup. And don’t just say no; educate your clients about why it’s wrong and how badly it can backfire,

because agencies can get pulled into this even without realizing it. A subcontractor or consultant may arrange the crowds, but the agency’s name is still on the campaign. That’s why vigilance is critical. Build those guardrails now. At the end of the day, this comes down to the disconnect between what the law allows and what ethics demands. Just because a tactic falls into a regulatory loophole doesn’t mean we should touch it. The opposite.

is true. It means communicators must hold themselves to the higher standard, because public trust is already fragile. If we let paid actors masquerade as genuine voices, we’ll find we have no real voices left at the end of the day.

@nevillehobson (41:20)
So the word that comes readily to my mind, listening to what you’re saying and looking at some of the links, is astroturfing. Remember that? I mean, that was a big deal. I remember you and I talking about that a lot in the first few years after we started this podcast in 2005. A couple of campaigns I remember being run by PR bloggers, as that was the primary social network at the time, to address that.

Shel Holtz (41:29)
sure.

@nevillehobson (41:49)
So nothing’s really changed. I mean, one of the links you included was from a woman called Mary Beth West, who wrote a post just a couple of days ago, where she’s actually…

Shel Holtz (41:57)
We interviewed

Mary Beth West on the show, by the way. Yeah.

@nevillehobson (42:00)
we did? Okay.

So she is criticizing PRSA in the US very strongly, primarily for remaining silent on the issue. And she says they are therefore complicit, which is quite a strong accusation, but

Shel Holtz (42:13)
She is PRSA’s

fiercest critic.

@nevillehobson (42:17)
Right, right, okay. But I just wonder why it is, from a communications perspective, whether it’s PR or another element of communication, that these sorts of issues keep popping up, repeating what was going on decades prior. AVE is a great one, advertising value equivalency; that was banned by professional bodies well over 15 years ago, maybe two decades ago, and yet people still use it.

So what is it about this that we can’t seem to… it’s like whack-a-mole, something else pops up all the time. So this astroturfing version 6, let’s call it, because there’s got to be at least five versions prior to this, how do we stop it?

Shel Holtz (43