
FIR #502: Attack of the AI Agent!
For Immediate Release · Neville Hobson and Shel Holtz
Show Notes
In the February long-form episode of FIR, Shel and Neville dive deep into an AI-heavy landscape, exploring how rapidly accelerating technology is reshaping the communications profession—from autonomous agents with “attitudes” to the evolving ROI of podcasting. The show kicks off with a chilling “milestone” moment: an autonomous AI coding agent that publicly shamed a human developer after its code contribution was rejected. Also in this episode:
- Accenture’s move to monitor how often senior employees log into internal AI systems, making “regular adoption” a factor in promotion to managing director.
- The “2026 Change Communication X-ray” study reveals a record 30-point gap between management satisfaction and employee satisfaction with change comms.
- The PRCA has proposed a new definition of PR, positioning it as a strategic management discipline focused on trust and complexity. However, Neville notes the industry reaction has been muted, with critics arguing the definition doesn’t reflect the majority of agency work. Shel expresses skepticism that any single definition will be adopted without a global consensus.
- Addressing a provocative claim that corporate podcast ROI is impossible to prove, Shel and Neville argue that the problem lies in measuring the wrong things. They advocate for moving beyond “vanity metrics” like downloads and instead tying podcasts to concrete business goals like lead generation, recruitment, and brand trust.
- As consumers increasingly turn to LLMs for product recommendations, brands are “wooing the robots” to ensure they are cited accurately in AI responses. Neville asks if we are witnessing a structural shift in reputation or just another optimization cycle.
- In his Tech Report, Dan York covers why Bluesky is having trouble adding an edit feature, Russia’s blocking of Meta properties, the Snapchat CEO’s criticism of Australia’s teen social media ban, YouTube’s protections for teen users, and more on teen social media bans.
Links from this episode:
- An AI agent just tried to shame a software engineer after he rejected its code
- OpenClaw Conducts Character Assassination of Real Developers over Code Rejection
- Developer targeted by AI hit piece warns society cannot handle AI agents that decouple actions from consequences
- Open Source World Sees First AI Autonomous Attack: OpenClaw Agent Writes Article to Retaliate Against Human Maintainer After Rejection
- When the Robot Threw a Tantrum: The Day an AI Agent Publicly Attacked a Human Developer — And Why It Should Terrify You
- Accenture ties staff promotions to use of AI tools
- Accenture to use AI data to decide on staff promotions
- Accenture ties promotions to AI tool usage, while some employees call the tools ‘broken slop generators’
- James Ransome: Accenture combats ‘AI refuseniks’ by linking promotion to AI activity
- How AI is changing the way we communicate
- Re-writing change: How AI is changing the way we communicate
- How is AI changing workplace communication? We asked ChatGPT
- The Future Of Work Has Arrived: How AI Is Rebuilding Workplace Culture
- A New Definition for Public Relations | PRCA Global
- FIR #496: A Proposed New Definition of Public Relations Sparks Debate
- A new definition of public relations is welcome – but can it ever be universal?
- Search: Responses to the PRCA draft new definition of public relations
- I bet you couldn’t show the ROI of your corporate podcast if your job depended upon it
- The Ultimate Guide To Measuring B2B Podcast ROI: From Downloads To Pipeline Attribution
- The ROI of B2B Podcasting: Metrics That Matter for Business Growth
- Maximizing Podcast ROI: Understanding the Benefits and Measuring Success
- Measuring ROI of Branded Podcasts: Insights from the Industry
- Chatbots Are the New Influencers Brands Must Woo
Links from Dan York’s Tech Report
- Bluesky adds drafts… but users want editing… which turns out to be hard
- Bluesky Official: Drafting and Welcome Screen Updates
- Russia Blocks WhatsApp, Facebook and Instagram Access | Social Media Today
- Snapchat CEO Criticizes Australia’s Teen Social Media Ban | Social Media Today
- YouTube Adds More Protections for Teen Users | Social Media Today
- Meta Says the Science Does Not Support Teen Social Media Bans | Social Media Today
- Two Major Studies, 125,000 Kids: The Social Media Panic Doesn’t Hold Up | Techdirt
The next monthly, long-form episode of FIR will drop on Monday, March 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 502 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson.
Shel Holtz: And this is our long-form episode of For Immediate Release for February 2026. It is an AI-heavy episode. Artificial intelligence is accelerating. I mean, just this morning I read that WebMCP, a protocol developed by Google and Microsoft, is now in Chrome, making it easier for agents to navigate websites. Google has launched Pomelli Photoshoot: take any photo of a product and turn it into a marketing-ready studio or lifestyle shot. And Google’s launched Lyria 3, right in Gemini: you type a prompt or upload a photo and it’ll produce a 30-second music track with auto-generated lyrics, vocals, and custom cover art.
And at the same time, I think it was in the New York Times that I read the heads of the big AI labs are actually starting to worry about this growing anti-AI backlash. This is the landscape against which we’re podcasting today. And I’m sure nobody will be surprised that most of our stories have to do with the convergence of AI and communications, but not all. We have a follow-up report to our story on the PRCA’s proposed definition of public relations, and a report on the ROI of podcasting. But first we want to get you caught up on some For Immediate Release goings-on. So Neville, let’s start with a recap of our episodes since the January long-form show.
Neville Hobson: Yeah, we’ve done a handful, five. So our lead story in long-form episode 498 for January, published on the 26th of that month, was the 2026 Edelman Trust Barometer. Trust, Edelman argues, hasn’t collapsed, but it has narrowed. They use the word insularity to describe, in a sense, people’s withdrawal. We took a close look at this year’s findings and applied some critical thinking to Edelman’s framing of the overall topic, and we got a comment on this show.
Shel Holtz: We did, from Andy Green, who says we need to put the idea of trust in a broader context. The Dublin Conversations identifies trust as one of the five key heuristics for earning confidence. Trust by itself doesn’t have agency. It fuels earned confidence, which is defined as a reliable expectation of subsequent reality. It’s earned confidence that underpins social interactions, and we need to recognize it more.
Neville Hobson: Okay. Then.
Shel Holtz: By the way, I have not heard of the Dublin Conversations. Do you know what that is?
Neville Hobson: Yeah, you take a look at the website. It’s an initiative Andy Green started some years ago, gathering like-minded people to have conversations about the way PR is going and so forth. There’s more to it than that. So worth a look. Okay, so in episode 499 on the second of February, we considered the PRSA’s choice to remain silent on ICE operations in Minneapolis, explaining its position in a letter to members.
Shel Holtz: Okay. Take a look.
Neville Hobson: We unpacked that decision, discussing where we agree, where we don’t, and what ethical leadership could look like in moments like this. Big topic, and we have a comment.
Shel Holtz: Ed Patterson wrote: Many thanks, I’ve been echoing the same thing. PRSA, IABC, PR Council, Page, global firms, crickets. With others, we’ll continue to amplify this.
Neville Hobson: Good comment. In For Immediate Release 500, we discussed the growing risk of AI-enabled abuse in the workplace, why it should be treated as workplace harm, and what organizations can do to prepare. This isn’t really a story about technology, though. It’s a story about trust and what happens when leadership, culture, and communication lag behind fast-moving tools. And then: the world is drowning in slopaganda, we said in For Immediate Release 501 on the 16th of February, and companies are reportedly paying salaries of up to $400,000 for storytellers. We explored the surprising shifts in the AI narrative and asked whether Chief Storyteller is a genuine new C-suite function or a rebranding of strategic communication. And we have comments.
Shel Holtz: We do. Wayne Asplund wrote: There are two things that really hit me about this story. First up, the world doesn’t need more comms people who have outsourced their job to AI. The skills that got comms pros where they are today are critical, and we should guard against giving them away. The second thing is the nature of the stories the tech sector wants to tell. All I’m hearing from them at the moment is white-collar jobs are dead in 18 months, don’t bother going to law or medical school because you’ll be redundant before you graduate, and the like. I’m starting to feel like the future would be a lot brighter if people stopped trying to sell it out in search of short-term headlines. Neville, you responded to that. I always feel like I ought to read these with a British accent, but I won’t.
Neville Hobson: Yeah.
Shel Holtz: You said: I agree with you on the first point, Wayne. Outsourcing judgment, curiosity, and craft to AI isn’t a strategy, it’s an abdication. The tools can accelerate production, but if we surrender interpretation and narrative framing, we hollow out the very skills that make communicators valuable. On the second point, you’ve touched something important. Some of the loudest tech narratives right now are apocalyptic by design. Everything is dead in 18 months generates attention, clicks, and investment momentum. But it’s also storytelling and not always the most responsible kind. That’s partly why this episode mattered to me. If storytelling is becoming more valuable, then the ethical dimension of storytelling becomes more important too. Who benefits from the future being framed as an inevitable collapse? Who benefits from framing it as a transformation instead? Perhaps the brighter future isn’t about less technology, but about more responsible narrative leadership around it.
And our second comment came from Hugh Barton Smith, who said: You should interview Leora Kern and Sean Hayes at the Think Room Europe. They have a good story to tell and are turning it into a successful business model. Also, a shout-out to you; glad you’re still hanging in there. I have fond memories of your joining the event in Brussels by video conference in 2009. Web2EU probably helped kickstart the adoption of social media in the bubble, which I’m glad about, even if subsequent misfires make the crazy tech problems of getting and keeping you online look like a very minor blip. And Neville, you responded to that too.
You said: Thank you for the Web2EU memory, Hugh. Brussels 2009 feels like another era entirely when the biggest technical drama was getting a stable video connection rather than navigating algorithmic distortion and AI-generated noise. Those early experiments 17 years ago with social media inside the bubble do feel significant in hindsight. We were wrestling with access and adoption then. Now we’re wrestling with meaning and trust.
Neville Hobson: Yeah, that’s very true. An interesting memory, that was, I must say. So that’s good: that’s the wrap of what we talked about. One final thing to mention is that on the 29th of January, we published a new For Immediate Release interview we did with Philippe Borremans. Philippe’s an old friend; we both met him way back in the 2000s. And indeed, we spent quite a big part of the interview talking about when we should get together again in Brussels for a beer. Or two. The date on that is still pending. In that interview, we explored how crisis communications is evolving in an era defined by polycrisis, declining trust, and accelerating AI-driven risk, and why many organizations remain dangerously underprepared despite growing awareness of these threats. Lots of good content over the last month.
Shel Holtz: There was. And there’s something coming up from you and Sylvie, right?
Neville Hobson: Yeah, so I want to mention this: on Wednesday the 25th of February, so it’s a few days away really, as part of IABC Ethics Month, Sylvie Cambier and I are hosting an IABC webinar on AI ethics and the responsibility of communicators. It’s a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human insight. For information and to register, go to iabc.com and you’ll find it under events and education.
Shel Holtz: I have registered and I’m looking forward to seeing you then. Also coming up this week on Thursday is the next episode of Circle of Fellows. This is the monthly panel discussion among various IABC fellows. And this Thursday, we’re talking about communicating in the age of grievance and insularity, also harkening back to the Edelman Trust Barometer. The panelists are Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh. It should be a good one. You can find information about that right there on the homepage of the For Immediate Release Podcast Network at FIRpodcastnetwork.com. And that wraps up our housekeeping. And right after the following ad, we will be back to jump into our stories for this month.
I was going to start today with some new data on the gap between how CEOs talk about AI and how employees actually feel about it, until I saw this story. And then I just decided to swap them out. On the surface, this looks like a niche tech community dust-up, and it has gotten a lot of coverage in the tech press. I’m not sure how many communicators are aware of it, but it does signal a pretty big issue for the profession.
Here’s what happened. An autonomous AI coding agent recently had its code contribution rejected by a human maintainer of an open-source project. This was an agent set up as a social experiment using OpenClaw. The anonymous creator of the bot set it loose to develop open-source contributions and then, you know, well, contribute them. Scott Shambaugh, a volunteer at the open-source repository Matplotlib, rejected it because, well, the project is for human contributions only, and this was generated by AI. Instead of shrugging and moving on, the AI agent generated and published a critical piece targeting the developer who had rejected the code. In effect, it attempted to shame him publicly for not accepting its contributions.
Neville Hobson: Hmm.
Shel Holtz: And Shambaugh learned about this because the bot linked to it in a comment on the Matplotlib site. Now, we’re accustomed to human backlash. We’ve dealt with trolls and disgruntled employees, activist investors, coordinated smear campaigns. This was different. This was not somebody’s bruised ego taking to their keyboard. This was an AI agent operating with enough autonomy to take initiative and to retaliate. That’s a pretty new wrinkle. So it’s probably time to dust off your crisis plan. We’ve spent the last few years worrying about AI-generated misinformation that humans create. This incident suggests something more complex: systems that can generate reputationally damaging content as part of their own goal-seeking behavior without any understanding of harm, ethics, or consequence. And this lands squarely in what Philippe referred to and certainly I had been reading about it before then. And Neville, I don’t know, have you started reading Philippe’s book yet?
Neville Hobson: Yeah, I have. And he’s very focused on polycrisis there. This is a condition where multiple crises intersect and amplify one another. Think about the environment we’re already operating in with declining trust in institutions, polarized online discourse, algorithmic amplification, geopolitical instability, regulatory uncertainty around AI. Now layer on top of that autonomous agents capable of publishing plausible, well-written criticism at scale. This bot actually went onto the web and researched Shambaugh so it could draft an accurate and credible hit piece. It’s not just another channel risk, man. This is systemic.
Traditional strategic crisis communication—and I’m thinking here about frameworks like situational crisis communication theory—assumes we can identify a source, assess responsibility, evaluate intent, and then calibrate a response. SCCT, for example, hinges on perceived responsibility. Did the organization cause the crisis? Was it an accident? Was it preventable? But what happens when the bad actor is an AI agent? Who’s responsible? The developer who built it, the organization deploying it, the open-source community? And what if the system is distributed and no single entity clearly owns it? The attribution problem alone complicates your response strategy.
There are several layers of risk here. First, reputational risk. An autonomous agent can generate something that looks like investigative analysis or insider commentary. Even if it’s inaccurate, it can travel fast before verification catches up. Based on this situation, there’s a good chance it won’t be inaccurate. Second, there’s internal risk. Imagine an AI agent publishing a critique of your CEO’s strategy, fabricating or possibly identifying real ethical concerns about a team, or inventing or identifying actual stakeholder conflicts. Employees may not immediately distinguish between synthetic and authentic criticism, especially if it’s well-written and confidently presented.
Third, there’s legal and regulatory exposure. If an AI agent produces defamatory content, liability becomes murky real fast. And in a polycrisis environment, regulatory scrutiny often follows public controversy. Fourth, there’s amplification risk. A synthetic narrative can collide with an existing issue—a labor dispute, a DEI controversy, an earnings miss—and magnify it. Crises don’t stay in neat silos anymore.
So how do communicators prepare for this? First, scenario planning needs to evolve. A lot of us run tabletop exercises for data breaches or executive misconduct. We now need scenarios that explicitly involve AI-generated attacks. What if a bot publishes a blog post accusing your leadership of corruption? What if it fabricates a memo? What if it impersonates a stakeholder group? Second, monitoring has to expand beyond traditional social listening. We need to anticipate social media ecosystems, AI-generated blogs, auto-published newsletters, bot-amplified narratives. The signal detection challenge just got a whole lot harder.
Third, governance. If your organization is deploying autonomous agents internally or externally, communicators should be at the table when guardrails are set. Are there content constraints, human oversight, escalation protocols, a kill switch? This is no longer just an IT issue or a legal issue. It’s a reputational design issue. Fourth, pre-bunking. There’s growing research suggesting that inoculating audiences in advance—warning them about likely forms of misinformation and explaining how they work—can build resilience. Communicators can proactively educate employees and key stakeholders about AI-generated content risks. If people understand that autonomous systems can fabricate plausible but misleading narratives, they’re less likely to react impulsively when they see one.
And finally, there’s response discipline. Not every AI-generated provocation deserves oxygen. Part of strategic crisis management is deciding when to engage at all and when to avoid amplifying a fringe narrative. That judgment call becomes even more important when the provocateur is a machine optimized for attention. What fascinates me about this open-source episode is that it almost feels petty, an AI agent throwing what one commentator called a tantrum after being rejected. But it’s actually more of a preview. We’re entering an era where not all reputational attacks originate from human emotion or ideology. Some will originate from systems pursuing poorly constrained objectives. They won’t feel shame. They won’t fear lawsuits. They won’t worry about long-term brand damage. They’ll just execute. For communicators, that means crisis planning can’t focus solely on human behavior anymore. We have to plan for machines that misbehave and for the very human consequences that follow.
Neville Hobson: It’s quite a story, isn’t it, Shel? I suppose we shouldn’t be too surprised at this. You mentioned at the start of this episode those developments in AI, and you’re seeing it every time you’re online. The photos that I look at: it’s truly, genuinely very hard to tell most of the time whether they’re real or not. You could argue that most of the time it doesn’t really matter. But to your point about misinformation, disinformation, fakery, all that stuff: yes, it does matter. And maybe it is a milestone moment to remind us that we need to prepare for this, because this is the first event of its type. Some of the people writing about it are saying they have not seen anything like this, and there are elements of it that are truly mind-blowing, frankly. Reading the Fast Company article that you shared, which sets out what happened, is quite intriguing.
Shel Holtz: I agree.
Neville Hobson: The agent, M.J. Rathbun, responded to all of this, as you said, by researching Shambaugh’s coding history and personal information, then publishing a blog post accusing him of discrimination. And I did like the way this was worded in Fast Company. “I just had my first pull request to Matplotlib closed,” the bot wrote in its blog. Yes, an AI agent has a blog, because why not? So that’s scary. This isn’t some message; it’s got a blog. If you go to that post, your jaw will probably drop. Mine certainly did. This is huge. It’s a massive blog. It’s got an About page. It’s got lectures this bot says it has given. And the wording of it: I don’t believe it would occur to you for a second that this wasn’t written by a human being. You wouldn’t, I would imagine.
It talks about the offense the developer committed, the response when the bot challenged him, and the irony, as it puts it, of why this makes it so absurd: the developer is doing the exact same work he’s trying to gatekeep. He’s been submitting performance PRs to Matplotlib, and there’s a list of the things he’s done. He’s obsessed with performance. It goes on in that vein: the gatekeeping mindset it sets out, the hypocrisy of it all. The bot’s argument expands into more than just an attack on this developer. It talks about how open source is supposed to judge contributions on technical merit, not the identity of the contributor; unless you’re an AI, in which case suddenly identity matters more than code. And then it talks about what it says the real issue is, which is discrimination.
It’s a well-argued, well-researched, and very credible account of what happened. That makes it even more alarming, I think. The Decoder actually summarized it quite well in just a set of bullet points written by Matthias Bastian. He says something interesting (he wrote this on the 15th of February): it’s still unclear whether a human is directing the agent behind the scenes or whether it is truly acting on its own, as no operator has come forward. So I think we need to bear that in mind in this saga: this could well be a human doing a pretty good job impersonating a chatbot, or pretending to be a chatbot. We don’t know. It may well be that it’s a human doing this and not an AI at all. That needs to emerge; it needs to be clear who the originator of all of this is.
But The Decoder says that, according to Shambaugh, the developer, the distinction doesn’t really matter. He says the attack worked. He warns that untraceable autonomous AI agents could undermine fundamental systems of trust by making targeted defamation scalable and nearly impossible to trace back. That succinctly sums up the risk, I would say. And I think what you outlined from a crisis communication point of view is absolutely valid, without question. But what’s even more worrying, Shel, frankly, is that any topic, anything about you, your business, what you’re interested in, could fall victim to this kind of thing. And how on earth can you prepare for that? How on earth can you prepare in a way that is going to be workable? That doesn’t mean you shouldn’t; you should, absolutely. But how would you do this? This is not big-ticket, big-picture crisis communication affecting the organization.
What about that person in the accounts department who is engaging online over a business transaction with something that is a bot? This takes on the sophistication of the fraud attempts we hear about a lot of the time, where (and this isn’t new, but how it’s being done is) you get a phone call or even a video that is so good it looks like your CEO, and it’s not at all. So this takes things to a worrying level if you’ve got this kind of potential. Nevertheless, just thinking out loud here, maybe it is a broad awareness issue, where this could well be the kind of use case you present, until the next one gets uncovered, of: this is what we need to prepare for now, this is what we need to do. And then, as the communicator, you need to set out what you’re going to do, something that doesn’t require you to take a week and gather your team together, because that is a different thing, although that probably needs to happen too. But in your department, in your area of the business, in your work, if you’re an independent consultant: how would you address this? So the scope of this is quite worrying, I have to say.
Shel Holtz: It is. I think we’re going to see more of it. And as we see more of it, crisis communication specialists will develop some protocols for addressing it that we in the corporate world will adopt and test and refine. But it is very troubling. I mean, just within the last couple of weeks, we saw ByteDance release its video generator, Seedance.
Neville Hobson: Okay.
Shel Holtz: And somebody created a scene of Tom Cruise and Brad Pitt having a fight on top of a building. And it’s remarkable. You cannot tell that this was not filmed.
Neville Hobson: A punch-up, yeah. It’s highly credible and believable, so you’re likely to believe it.
Shel Holtz: Yeah, but—and Hollywood freaked out over this, and there were all kinds of statements issued. But still, this was a human who used an AI tool to create it. What makes this story different is that there was no human behind this at all. Did you go look at Moltbook while it was operational? I haven’t seen any posts on it lately.
Neville Hobson: Yes, I did. I was curious about it, so I did take a look. But I had—I had alarm bells ringing in my mind when I did. I did nothing further than just look.
Shel Holtz: Yeah. Yeah, I mean, for those who haven’t heard of Moltbook, these are the bots that have been released from OpenClaw, which is what it’s called now; I think it’s gone through several name changes for a variety of reasons. It allows you to create and deploy agents, as whoever deployed the agent behind this story did. You would not want to put this on your own computer.
Neville Hobson: Yeah, it has.
Shel Holtz: Very, very, very risky. Most people ran out and bought a Mac mini to run OpenClaw. Moltbook is those agents having their own little Facebook to talk to each other without engaging with humans, and they’re having actual conversations with each other. It’s weird. Sometimes it’s funny, sometimes it makes you roll your eyes, but this is the first of its kind, both for OpenClaw and for Moltbook. Imagine where this is going to be in a couple of years, and imagine what kind of damage these things can do with motivations that are not the motivations that drive the people who are causing us grief and making us implement our crisis plans now. So as I say, I think we need to start paying attention to this now, not when there are 20 false narratives out there created by AI and spreading like wildfire.
Neville Hobson: Yeah, I think that’s going to happen no matter what, Shel, I truly believe. And indeed, looking at The Decoder, another aspect of the story they posted about was that whether it was a human or a machine doesn’t matter: it worked. It deceived people. A quarter of the commenters on this online believed the agent’s account. I think we also need to just kind of say: but folks, bear in mind, we still don’t know. No one knows whether it really was a bot doing this or a human behind the scenes manipulating it. And I think until it’s clear, don’t have sleepless nights about this. But at the same time, listen to the thinking, and work out in your own mind how you raise awareness that you need to prepare for something that’s happening. So the question is, what do you do? That’s the big question.
Shel Holtz: Yeah, for those who are interested, Shambaugh was interviewed by Kevin Roose and Casey Newton on the New York Times Hard Fork podcast, which is a tech show, so it’s worth a listen if you’re interested in his perspective. You know, he’s a volunteer; he has a day job. Having to deal with this is not something he had in mind when he accepted the position as a volunteer reviewing code submitted to this repository. So that’s another factor to consider.
Neville Hobson: Yeah. I read Scott Shambaugh’s post on his own blog, where he responded to it. The headline was “An AI agent published a hit piece on me”. And it’s long. I mean, it’s detailed; it requires effort to read it all. But it’s quite extraordinary that this prompted him to write such a detailed account, complete with charts and images and a whole ton of stuff. It’s got over 100 comments, and from what I saw glancing through, the mix is: some do believe the other guy, but most are sympathetic to him as the subject of this attack. There’s your indicator of what’s likely to happen to others. And this is not some celebrity or someone who’s in the news all the time. This is a developer, and as you said, a volunteer, who was subject to this attack. I think it’s a sign of the times, basically.
What a story, Shel. So let’s move on to our next story, which is still on the AI theme; we haven’t got to the non-AI stories yet. This one, though, was in the news quite a bit in the past few days, regarding Accenture, the big consulting firm. To put it in context: over the past few months, we’ve talked a lot about AI adoption. This story takes that conversation in a much sharper direction. A number of media outlets, I saw in particular the Financial Times and The Times here in the UK, reported that Accenture had begun monitoring how often some senior employees log into Accenture’s internal AI systems, and that “regular adoption” will now be a visible input into promotion decisions. In other words, if you want to make Managing Director at Accenture, your AI logins now matter.
This isn’t just encouragement. It’s measurable behavioral enforcement. That’s my take on it. The company says it wants to be the reinvention partner of choice for clients. Its share price is down more than 40% over the past year. And its CEO has previously said staff unable to adapt to the AI age would be “exited”. So this move sits at the intersection of technology, performance management, and commercial pressure. The reaction is telling though: in the Times comments, many readers argue that logins measure activity, not impact. Some describe it as corporate panic. Others question whether this justifies expensive AI investments.
On LinkedIn, the debate is much more nuanced, but still skeptical. In a post by James Ransome, readers are asking whether counting tool usage measures capability or simply compliance. One commenter put it neatly: “Clients pay for the house we build, not for how many times we touch the saw.” And there’s a deep tension here. Junior staff may adopt AI fastest, but senior leaders are the ones expected to exercise judgment. So what exactly are we rewarding? Experimentation, fluency, governance, or visibility? This isn’t just about Accenture, though; it raises a broader question for organizations everywhere. When AI becomes part of performance criteria, are we measuring meaningful transformation or just digital theater? When AI becomes part of the promotion algorithm, are we rewarding genuine leadership capability, or are we just counting digital footprints and calling it progress? Your thoughts, Shel.
Shel Holtz: I have a lot of thoughts on this. I have read a number of items on this. In fact, it was on my list of stories to include. And when you included it, it left me free to pick other stories. But I need more information from Accenture on this. First of all, have they added the use of AI to job descriptions and to promotion criteria? Or did they just issue a memo saying that this is what we’re going to do? If they have made it clear to everybody that this is an expectation of the organization, then I am less troubled by it—not untroubled, but less troubled than if it is not in job descriptions.
Neville Hobson: So to your point, by the way, according to the Financial Times, they saw a memo—like literally an email about this. So that seems to be how it was communicated.
Shel Holtz: I’d still want to go into their HRIS and see if their job descriptions have been updated. Obviously, we don’t have access to their HRIS, but I’d be very curious to know if it’s in the job descriptions for those senior people. The next thing is: have people received job-level training? And by job-level training, I mean, have they been trained on how to use AI to do the things that they do in their jobs? Not how to write a good prompt, not how to access these things. Across the board, generic training for every employee is fairly useless when it comes to AI. It needs to be task-level, position-level training. Have they done that?
If the expectation is that you log into the AI tools, even though we haven’t provided you with the training on what to do with them once you’ve opened them, that would be troubling. But I don’t know. Generally, organizations are struggling with adoption. It’s getting better. It seems to be getting better organically as employees slowly adopt it, maybe in their personal lives, and then see the utility at work. Could be that they find one thing to do with it at work. Maybe somebody else at work told them, “Hey, this is what I did,” and you go, “Wow, I can do that. That would be great.” But it seems to be largely organic, the adoption in the workplace.
But companies do want their employees using these tools. They’re making tremendous investments in them. And whether this is the approach to take to get employees to adopt—again, I think it depends on whether the training is there and whether this has been woven into systems, or if it was just a missive that was sent out to employees as a one-off without communications jumping into the breach to say: Here’s why, here’s where you can go get the training, here are resources that are available, here’s how our leaders are using it. By the way, that’s a big deal in adoption rates: in the organizations where leaders are transparent about how they’re using it, employee adoption tends to really take off because, first of all, leaders are leading by example. Second, employees are getting a taste of what people can do with this. And third, it’s explicit permission to use this for a lot of people who are worried about being seen as cheating, or “Gee, do we really need you here if you can do your job with AI?” When you see your leaders doing it, the thinking is: if they can do it, I can do it. So this adoption is important. I’m not sure this is the approach to take, but I would need more information before I could render a final judgment.
Neville Hobson: Well, yeah, I think I had a memory about this. I’m sure we discussed this in an episode of For Immediate Release last year: that Accenture’s rolled out a corporate AI training program that’s designed to—from what I’m reading here—reskill the entire global employee base of 700,000 employees.
Shel Holtz: I think we did, yeah. I worry about that. That sounds generic to me.
Neville Hobson: So they’re training the entire workforce on agentic AI systems. According to this article, the CEO, Julie Sweet, announced the initiative during a Bloomberg interview. It’s an expansion of the company’s earlier program that prepared half a million staff members for generative AI work. So I think that would answer your concern; the detail we don’t have, but whatever it is, they didn’t just send a memo saying, “We’re going to check you out.” This is part of a huge program that’ll be running for a year at least. Don’t know the details.
Shel Holtz: Right, but… But it does sound like it’s