PLAY PODCASTS
AI and the Writing Profession with Josh Bernoff

AI and the Writing Profession with Josh Bernoff

For Immediate Release · Neville Hobson and Shel Holtz

December 10, 2025 · 58m 6s

Audio is streamed directly from the publisher (traffic.libsyn.com) as published in their RSS feed. Play Podcasts does not host this file. Rights-holders can request removal through the copyright & takedown page.

Show Notes

Josh Bernoff has just completed the largest survey yet of writers and AI – nearly 1,500 respondents across journalism, communication, publishing, and fiction.

We interviewed Josh for this podcast in early December 2025. What emerges from both the data and our conversation is not a single, simple story, but a deep divide.

Writers who actively use AI increasingly see it as a powerful productivity tool. They research faster, brainstorm more effectively, build outlines more quickly, and free themselves up to focus on the work only humans can do well – judgement, originality, voice, and storytelling. The most advanced users report not only higher output, but improvements in quality and, in many cases, higher income.

Non-users experience something very different.

For many non-users, AI feels unethical, environmentally harmful, creatively hollow, and a direct threat to their livelihoods. The emotional language used by some respondents in Josh’s survey reflects just how personal and existential these fears have become.

And yet, across both camps, there is striking agreement on key risks. Writers on all sides are concerned about hallucinations and factual errors, copyright and training data, and the growing volume of bland, generic “AI slop” that now floods digital channels.

In our conversation, Josh argues that the real story is not one of wholesale replacement, but of re-sorting. AI is not eliminating writers outright. It is separating those who adapt from those who resist – and in the process reshaping what it now means to be a trusted communicator, editor, and storyteller.

Key Highlights

  • Why hands-on AI users report higher productivity and quality, while non-users feel an existential threat
  • How AI is now embedded in research, brainstorming, outlining, and verification – not just text generation
  • Why PR and communications teams are adopting faster than journalists
  • What the rise of “AI slop” means for trust, originality, and attention
  • Why the future of writing is not replacement – but re-sorting

About our Conversation Partner

Josh Bernoff is an expert on business books and how they can propel thinkers to prominence. Books he has written or collaborated on have generated over $20 million for their authors.

More than 50 authors have endorsed Josh’s Build a Better Business Book: How to Plan, Write, and Promote a Book That Matters, a comprehensive guide for business authors. His other books include Writing Without Bullshit: Boost Your Career by Saying What You Mean and the Business Week bestseller Groundswell: Winning in a World Transformed by Social Technologies. He has contributed to 50 nonfiction book projects.

Josh’s mathematical and statistical background includes three years of study in the Ph.D. program in mathematics at MIT. As a Senior Vice President at Forrester Research, he created Technographics, a consumer survey methodology, which is still in use more than 20 years later. Josh has advised, consulted on, and written about more than 20 large-scale consumer surveys.

Josh writes and posts daily at Bernoff.com, a blog that has attracted more than 4 million views. He lives in Portland, Maine, with his wife, an artist.

Follow Josh on LinkedIn: https://www.linkedin.com/in/joshbernoff/

Relevant Links

Audio Transcript

Shel Holtz

Hi everybody, and welcome to a For Immediate Release interview. I’m Shel Holtz.

Neville Hobson

And I’m Neville Hobson.

Shel Holtz

And we are here today with Josh Bernoff. I’ve known Josh since the early SNCR days. Josh is a prolific author, professional writer, mostly of business material. But Josh, I’m gonna ask you to share some background on yourself.

Josh Bernoff

Okay, thanks. What people need to know about me: I spent four years in the startup business and 20 years as an analyst at Forrester Research. Since that time, which was in 2015, I have been focused almost exclusively on the needs of authors, professional business authors. So I work with them as a coach, writer, ghostwriter, and editor, and do basically anything they need to get business books published.

The other thing that’s sort of relevant in this case is that while I was at Forrester, I originated their survey methodology, which is called Technographics. And I have a statistics background, a math background, so fielding surveys and analysing them and writing reports about them is a very comfortable and familiar place for me to be. So when the opportunity arose to write about a survey of authors and AI, I said, all right, I’m in, let’s do this.

Shel Holtz

And you’ve also published your own books. I’ve read your most recent one, Build a Better Business Book.

Josh Bernoff

Mm-hmm, yes. So, this is like, the host has to prod you to promote your own stuff. Yes. So, my two most recent books: I wrote a book called Writing Without Bullshit, which is basically a manifesto for people in corporations to write better, and I wrote Build a Better Business Book, which you talked about, a complete manual for everything you need to do to conceive, write, get published, and promote a business book. Yeah, so they’re both available online where your audience can find them.

Shel Holtz

Wherever books are sold. So we’re here today, Josh, to talk about that survey of writers that you conducted, asking them about their use of AI. What motivated you to undertake this survey in the first place?

Josh Bernoff

Well, I’ll just go back a tiny little bit. About two years ago, Dan Gerstein, who is the CEO of Gotham Ghostwriters and a really fantastically interesting guy, reached out to me because he knew my background in doing statistics and said, let’s do a survey of the ROI of business books: get business authors to talk about what they went through to create their business books and whether they made a profit from all the things that followed on from that.

At the conclusion of that project, which people can certainly still get access to at authorroi.com, it was clear that we could do a really good job together. So when he came to me and said, let’s do a survey about authors and AI, it’s a topic I’ve been researching a lot, talking to many authors about how they use it. And I said, all right, yeah, let’s actually get a definitive result here. And we were really pleased that the survey basically went viral.

We got almost 1,500 responses, way more than we did for the business author survey, because there’s a lot more writers than authors in the world. And because we got such a large response, it was possible to slice that so I can answer questions like how do technical writers feel about AI or is this different between men and women or older or younger people. And so that enabled us to do a really robust survey which people can download if they want. It’s at gothamghostwriters.com/AI-writer, available free for anyone who wants to see it.
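Slicing a large response pool the way Josh describes, comparing AI usage across writer types, genders, or age bands, comes down to grouping respondents by segment and computing per-group rates. A minimal sketch in Python (the field names `writer_type` and `uses_ai` are hypothetical illustrations, not the survey's actual schema):

```python
from collections import defaultdict

def usage_rate_by_segment(responses, segment_key="writer_type", flag_key="uses_ai"):
    """Share of respondents in each segment whose flag is True."""
    totals = defaultdict(int)    # respondents per segment
    positive = defaultdict(int)  # flagged respondents per segment
    for r in responses:
        seg = r[segment_key]
        totals[seg] += 1
        if r[flag_key]:
            positive[seg] += 1
    return {seg: positive[seg] / totals[seg] for seg in totals}
```

With a list of response dicts, this yields the per-category adoption rates behind headline figures like "61% of professional writers use AI", and the same pattern works for any segment column.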

Shel Holtz

And we’ll have that link in the show notes as well.

Josh Bernoff

Okay, great.

Neville Hobson

It’s a massive piece of work you did, Josh. I, I kind of went through the PDF quite closely because it’s a topic that interests me quite a bit. And I was really quite intrigued by many of the findings that it surfaced. But I have a fundamental question right at the very beginning, because I’m a writer myself. But I encountered this phrase throughout, “professional writer.” I’m not a professional writer, but I’m a writer.

And I know a lot of communicators who would say, yeah, I’m a professional writer. I don’t think it fits the definition you’re working to. So can you actually succinctly say what is a professional writer as opposed to any other kind of writer that communicators might say they are? What’s the difference?

Josh Bernoff

Yeah, there’s less there than meets the eye, and I will describe why.

So, we fielded this survey, and we basically said, if you are a writer, you can answer this survey, and we got help from all sorts of people who were willing to share it within their communities. So over 2,000 people responded. But of course, you have to disqualify people if they’re not really a writer, and the way we defined that is, we asked: do you spend at least 10 hours a week on writing and editing? And for somebody who didn’t, I’m like, okay, you’re not really a writer if you don’t spend at least 10 hours a week on it.

And we also looked at how people made their living. So let’s just say you’re a product manager. You’re probably doing a lot of writing, but you wouldn’t describe yourself as a professional writer. So part of what we did was to have people answer questions about what kind of writer are you?

And we had the main categories, and we captured almost everybody in them, you know, marketing writers, nonfiction authors, ghostwriters, PR writers, and so on. And although we had not intended to do so, we got almost 300 responses from fiction authors. And we were like, okay, what are we going to do here? Because these people are very different from the people who are writing in a business context or nonfiction authors, but I don’t want to invalidate their experience.

So we basically divided up the survey, and we said, most of the responses are from people who are writing things that are intended to be true, and a small group is from people who are intentionally lying, because they’re fiction writers. So then we had an ongoing discussion about what to call the people who write things that are intended to be true. And Dan Gerstein and I eventually agreed to call them professional writers, which is not a dig on the professional fiction authors; it’s just a catchall for people who are making their living as writers and writing nonfiction.

Shel Holtz

Josh, you described in the survey report a deep attitudinal divide where users see productivity and non-users see what you called a sociopathic plagiarism machine.

Josh Bernoff

Thanks. Now, now, wait a minute. I didn’t call it that. One of the people who took the survey called it that. Yes, that was a direct quote. I mean, I just want to comment here that in the survey business, we call responses to open-ended questions verbatims, right? So these are the actual text responses. And because we surveyed writers, these are the best verbatims I’ve ever seen. This is extremely literate.

Shel Holtz

OK, that was, that was a response. Got it. Well, yeah.

Josh Bernoff

A collection of people expressing their opinion, and the sociopathic plagiarism machine came from one of those folks. Yes.

Shel Holtz

I did like that a lot. But for somebody like me, a communications director managing a team, how do you bridge that gap when half the team might be ethically opposed to the tools that the other half is enthusiastically using every day?

Josh Bernoff

You just tell the other people to go to hell. No, I’m kidding! Now, it’s true. So one of the most notable findings of the survey was that people who do not use AI are likely to have negative attitudes about it. So it’s not just like, you know, well, I don’t happen to drink alcohol, but it’s fine with me. No, these people are saying: this is bad for the environment, it’s an evil product. There were a lot of interesting verbatims in the survey from people like that. 61% of the professional writers said that they use AI. So this is a minority of people who are not using it, and an even smaller group who are opposed to it. But they are fervently opposed to it. The people who do use it are generally getting really useful things done. A majority say that it’s making them more productive. And the people who are most advanced are doing all sorts of things with it.

By the way, this is really important to note. The thing that everyone’s sort of morally up in arms about, which is people generating text that’s intended to be read using AI, is actually quite rare. Only 7% did that, and only 1% did it daily. So most people are doing research, or using it as a thesaurus, or using it to analyse material that they find and are citing as background, or something like that. To come directly at your question, though, it is important to acknowledge this divide in any writing organisation.

And I think that the people who are using AI need to understand that there are some serious objections and they need to address that. The people who are not using it, I think, need to understand that perhaps they should be trying this out just so that they’re not operating from a position of ignorance about what the thing can do.

And I think most importantly, the big companies that are creating AI tools need to be a lot more serious about compensating the folks who create the writing that it’s trained on. Because, putting the sociopathic plagiarism machine aside, it’s pretty bothersome when you find out that the thing has absorbed your book and is giving people advice based on that, and you got no compensation for it.

Shel Holtz

I just want to follow up on this question real quickly. Were you able to quantify among the people who don’t use it and object to it the reasons? I mean, you listed a couple, but I’m wondering if there’s any data around the percentage that are concerned about the environment, the percentage that, I mean, the one that I keep reading in LinkedIn posts is it has no human experience or empathy, which I don’t understand why that’s a requirement for say earnings releases or welcome to our new sales VP, but nevertheless.

Josh Bernoff

Yeah, I was going to say that describes a bunch of human writers too. They don’t seem to have any empathy. So one of the questions that we asked is: how concerned are you about the following? And then we had a list of concerns. And it’s interesting that they divide pretty neatly into things that everyone is concerned about and things that the non-users are far more concerned about. So for example, the top thing that people were concerned about was, and I quote, AI-generated text can include factual errors or hallucinations. So even the people who use it are like, okay, we’ve got to be careful with this thing, because sometimes it comes up with false information.

For example, if you ask it for my bio, it will tell you that I have a bachelor’s degree in classics from Harvard University and an MBA from the Harvard Business School, and I’ve never attended Harvard. So it’s like, no, no, no, no, no, no, that’s not right!

On the other hand, there are some other things where there’s a very strong difference of opinion. So for example, the question: AI-generated text is eroding the perception of value and expertise that experienced writers bring to a project. 92% of the non-users of AI agreed with that, but only 53% of the heaviest users of AI agreed with that. So if you use AI a lot, it’s like, well, actually, this isn’t as big of a problem as people think.

On the environmental question, 85% of non-users were concerned about its use of resources, but only 52% of the heavy users were concerned about that. And I want to point out something which I think is probably the most interesting division here. If you ask writers, should AI-generated text be labelled as such, they mostly all agree that it should. But if you ask them, should text generated with the aid of AI be labelled as such, the people who use AI often think, well, you don’t need to know that I used it to do research, because it’s not visible in the output. Whereas the non-users are like, no, you used AI, you have to label it. So that’s a good example of a place where the difference of opinion is going to have to somehow get settled over time.

Neville Hobson

That’s probably one of those things that will take a while to settle, given what you see. We talked about this recently on verification. Some people, and I know some who are very, very heavy users of AI, don’t check stuff that is output with the aid of their AI companion. That’s crazy, frankly, because as Shel noted in our conversation on the latest episode of the FIR podcast, your reputation is the one that’s going to suffer when you get found out that you’ve done this and haven’t disclosed it.

But it also manifests itself in something like, you know, the great em-dash debate that went on for most of this year. Right. I wrote a post a couple of weeks ago about this, and about ChatGPT’s plan saying you can tell it not to use em-dashes.

And my experience is, I’ve done that and it still goes ahead and does it. It apologizes each time, and it still goes ahead and does it, you know. But you know what? That post produced an incredible reaction from people. 40,000 views in a couple of days. For me, that’s a lot, frankly. And I did an analysis, which I published just a few days ago, that showed the opinions people have about it are widely divided.

They range from some who see it as, I’m not going to give up my whole heritage of writing just because of this stupid argument, to others who say you’ve got to stop it, because even if we’re where it got it from in the first place, it signals that you’re using AI, and therefore your writing is no good. That kind of discussion was going on. So I see this continuing. It’s crazy. Looking at the data highlights, there’s some really fascinating stuff in there, Josh, that caught my eye.

Starting with the headline finding, that writers see AI as both a tool and a threat. And yes, that’s quite clear from what you’ve been saying. But also this: hallucinations concern 91% of writers. And I think that’s true, you know, no matter how experienced you are. It concerns me, which is why I’m strongly motivated to check everything, even though sometimes you think, God, don’t question, just do it.

I reviewed something recently that had 60-plus URLs mentioned in it. So I checked them all, and 15 of them just didn’t exist: 404s or server errors. And yet the client had issued it already, without checking that kind of thing. So you’ve got a job to educate them.
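Link-checking a document the way Neville describes is easy to automate. A minimal sketch using only Python’s standard library (the `audit` helper and any URLs fed to it are illustrative, not a tool from the conversation):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify(url, opener=urlopen):
    """Return 'ok' for a 2xx response, otherwise a short error label."""
    try:
        with opener(url, timeout=10) as resp:
            return "ok" if 200 <= resp.status < 300 else f"HTTP {resp.status}"
    except HTTPError as e:
        return f"HTTP {e.code}"        # e.g. 404 Not Found, 500 Server Error
    except URLError as e:
        return f"error: {e.reason}"    # DNS failure, refused connection, ...

def audit(urls, opener=urlopen):
    """Map each URL to its status so dead links stand out at a glance."""
    return {u: classify(u, opener) for u in urls}
```

Accepting a custom `opener` also makes the check testable without touching the network; in real use you would just call `audit(list_of_urls)` and scan the result for anything other than "ok".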

So I guess this is all peripheral to the question I wanted to ask you, which is about that correlation in the data highlights between AI usage and positive attitudes towards it, as opposed to the negative attitudes: the users are very highly positive.

How should we interpret this divide? I guess is the question. You may have touched on this already, actually. Is it just a skills gap? Is it a cultural gap? Or what is it? Because the attitudes, like much these days, seem to me to be quite polarised: strong opinions, pro and con. How do we interpret this?

Josh Bernoff

All right, so I want to go back to a few of the things that you said here. I have some advice in my book, Build a Better Business Book, and it’s generally good advice about checking facts that you find; finding false information on the internet has always been a problem for people who are citing sources.

There used to be a guy called The Numbers Guy, Carl Bialik, in the Wall Street Journal, who would actually write a column every month about some made-up statistic that got into print. All that we’ve done is make it much more efficient. But people do need to check. And it’s interesting: you learn when you use these tools that it’s subtle. If you click and say, OK, that is a real source, that’s fine.

But often, it will tell you that that source says X or Y and then you go and you read it and you’re like, no, it doesn’t actually say that. So yes, you are now citing a source that when you go look at it says the opposite of what you thought it said. Real professional writers know that that is an important part of their job and it just happens to be easy to behave incompetently and irresponsibly now.

But believe me, I deal with professional publishers all the time and there are all these clauses now in their contracts which basically say you have to disclose when you’re using AI and if there’s false information in here then you’re responsible for it and we might not publish it. I will say this, so let’s just put this in a different context. So think about Photoshop.

Okay, when Photoshop started to become popular, people were like, wait a minute, we can’t believe what we see in pictures. Maybe the person doesn’t have skin that’s all that smooth. Maybe that background is fake. But in context where you’re supposed to be doing factual stuff, like a photo that’s in a magazine, there’s safeguards against this and the users have learned what is legit and what isn’t. And I think also that the readers have learned that, okay, we have to be a little skeptical about what we see. This AI has made it possible to do that with text way more easily, but it’s still the case that you, as a reader, you need to be skeptical and as a user, you need to be sophisticated about what you can and can’t do and what is and is not legit.

I do these writing workshops with corporations. I’m doing one next week with a very large media company. And I’m trying to help them to understand, start with clear writing principles and use AI to support them as opposed to use it to substitute for your judgment, generate crap, and then do a disservice to the poor people who are reading it.

Shel Holtz

I am always amused when I see people expressing such angst over AI-generated images taking money from artists. And I didn’t hear the same level of anxiety when CGI became the means of making animated movies. What happened to the people who inked the cels? They’re out of a job. No, Pixar got nothing but praise.

Josh Bernoff

Yeah, I know. Right. They should. And it’s like, no, no, they should have actually gotten 26,000 dinosaurs in that scene, and I’m like, you were entertained, admit it, and you know that they’re not real, and that’s it…

Shel Holtz

Yeah. Josh, your data shows that thought leadership writers and PR and comms professionals are the heaviest users of AI. Thought leadership writers, 84% of them and 73% of PR and comms professionals are using AI in their writing. Journalists are somewhere around half of that at 44%.

Did you glean any insights as to why the people who are pitching the media are using this more than the people being pitched?

Josh Bernoff

I have some theories about that. What I’m about to tell you is not supported by the data, although I could go in and start digging around; there’s infinite insight in here if I do that. So I think journalists are a little paranoid about it. And the fact is, yes, 44% of the journalists said that they used it, but only 18% said that they used it every day, which is at the very bottom of all the professional writers.

So I think they are not only concerned about their livelihood, but also that they don’t wanna make a mistake. They don’t wanna get anything into print that’s false. Whereas if you look at the thought leadership writers and the PR and comms professionals, it’s a simple question of volume. These people are under pressure to produce a very large amount of information.

And I can tell you as a professional writer that there are certain tasks that you really would rather not spend time on if an AI can do them. So if you’re gathering up a bunch of background information, and Perplexity does a better job on contextual searches than Google, which it absolutely does, then you’re probably going to use it.

Now, there is the risk that these people are basically generating large quantities of crap and then sharing it. But I think that that rapidly becomes unproductive. If you’re basically spamming people with AI slop, then they will immediately become sort of immune to that, and then you lose trust and at that point you’ve destroyed your own livelihood.

Neville Hobson

Yeah, absolutely. I want to ask you about one of the other findings you had in here: ChatGPT is the clear leader amongst all writers, 76% using it weekly. I use ChatGPT more than any other tool. I’m very happy with it. It does what I want. But in light of how fast things move in this industry, how things change, how do you see that shifting? Or does it not actually matter at the end of the day which tool you use, as long as it delivers what you want from it?

Josh Bernoff

Well, what you have here is people spending hundreds of millions of dollars to become the default choice, the sort of dominant company here. And if you look at past battles of this kind to be like, who is the top browser or what’s the top mobile operating system, this is a land grab.

If you sit out and wait and see what happens, you could very easily end up on the sidelines, which is why there’s so much money flooding into this. ChatGPT definitely has an early lead, but there was an article in the Wall Street Journal yesterday, I believe, about the fact that they’re very concerned about Google. And the reason is, on a sort of features and capability basis, is Google better?

It depends on what day it is; they keep making advances. But it does integrate with people’s basic use of Google in other ways, for example, in email. And wait a minute, have we never heard this story before? Where a company that has a dominant position in one area attempts to leverage it in another area? Gee, that’s like the whole story of the tech industry for the last 30 years!

The same is true elsewhere; my daughter works in a company that uses Microsoft products, which is very common. And so everybody in that company is using Microsoft Copilot, because they got it for free. If you ask me who is going to have the top market share in 18 months, I have no clue, but I don’t think that ChatGPT is necessarily in a position to say, ours is clearly better than everybody else’s and so everyone will use what we have.

I will point out, and I’m trying to remember if I have the number on this, but the average person who is using these tools in a sophisticated way is typically using at least three or four different tools. So just like you might use Perplexity for one web search and Google for another, you might decide to use Microsoft Copilot in some situations and Google Gemini in others.

Neville Hobson

It’s interesting, because I started using Copilot recently through a change in how I’m doing something for one particular area of work I’m interested in. And it blew me away, because Copilot is using GPT-5. And I sense the output I get from the input I give it is in a similar style to what ChatGPT would write.

So I’m impressed with that, and I haven’t attached any further significance to it. Maybe it’s coincidental, but I quite like that. So that’s actually getting me more accustomed to Microsoft’s product. These little things, maybe this is how it’s all going to work in the end.

Josh Bernoff

Yeah, I will point out that the professional writers I talk to are very enamoured of Claude as far as the creation of text. And definitely if you’re doing a web search, Perplexity has got some pretty superior features for that. I find myself often telling ChatGPT, don’t show me anything unless you can provide a link, because I’m not going to trust you until you do that. And I’m going to check that link and see what it really says.

So, you know, the development of specialised tools for specialised purposes is absolutely going to continue here.

Shel Holtz

Yeah, I’ve been using Gemini almost exclusively since 3.0 dropped. I find it’s just exponentially better, but I’m sure that when ChatGPT releases their next model, I’ll be back to that. In the meantime, I did see Chris Penn commenting, I think it was just yesterday, on that Wall Street Journal article, pointing out that Gemini is baked into Google Docs and Google Sheets and all the Google products, whereas OpenAI doesn’t have any products to bake it into.

And that’s a clear advantage to Google. But Josh, you revealed in the research that 82% of non-users worry that AI is contributing to bland and boring writing. What I found interesting was that 63% of advanced users felt the same way, that it’s creating this AI slop.

So as a counsellor to writers, how would you counsel people? Our audience is organisational communicators, so I’ll say: how would you counsel organisational communicators, when cutting through the noise is vital and you need to reach your audience? I deal mostly with employee communication, and we need employees to pay attention to this message, despite the fact that there are so many competing things out there just clamouring for their attention. How do you avoid the trap of this bland and boring writing when you’re so desperate to cut through that clutter and capture that attention?

Josh Bernoff

Yes, well, large language models create bad writing far more efficiently than any tool we’ve ever had before. And of course, I’m talking to both corporate writers and professional authors all the time about this. So basically, the general advice is: the more you can use this for things behind the scenes, the better off you are, and the more you use it to actually generate text that people read, the worse off you are.

I’m gonna give you a very clear example. So I am currently collaborating with a co-writer on a book about startups for a brilliant, brilliant author who really knows everything about startups, has an enormous background on it. And he has insisted that I use AI for all sorts of tasks. In fact, he’s like, you know, why are you wasting your time when you could just send this thing off and tell it to do the research? And we’ve done some spectacular things like I had a list of startups and I told it to go out on the internet and get me a simple statement about who they are, what financing stage they’re in, what category they’re in.

And it goes off and it does that. That would have taken me days. But because this guy is intelligent, there’s a reason he’s hired me and not replaced me with AI: once it’s time to actually create something that’s gonna be read by people, we have to rewrite it from beginning to end. As a professional writer, that is how I make a living. And what I write is the complete opposite of bland and boring. And he doesn’t want bland and boring. He wants punchy and surprising and… insightful.

So, you know, you can both say use AI for all of this other stuff, and don’t you dare publish anything that it creates. And I feel like that is generally the right advice, that everybody is going to end up where I have ended up, which is: even in a corporate environment, it can support you, but you’re not using it to generate text that people are going to actually read.

Neville Hobson

It’s a really good point you’ve made there, I think, because one of the findings in the survey report is that AI-powered writers are sure they’re more productive, and I definitely sit in that category. I’m absolutely convinced I’m probably in that, what is it, 92% or whatever it is of the advanced users who think so. How do I prove it?

Well, it’s not so much the output, it’s the quality. It kind of tunes your mind into some of the reports that you read, or what others are saying elsewhere: use AI tools to support you in doing the stuff that AI is better at than humans. Unstructured or structured data, whatever it is, finding patterns, all that stuff that we can all read about. And you do the intellectual stuff, the stuff humans are really good at.

Josh Bernoff

Absolutely.

Neville Hobson

And they sound great, phrases and sentences. And I’ve said to lots of people, I don’t see too many people doing that. So they’re obviously not in the advanced stage, let’s say. I find it hard to believe, frankly. Really I do. In conversations I’ve had during this year with those who diss this, who say, like some of your respondents have said, you know, it’s the, what is it, psychotic plagiarism machine or whatever it was, the stuff…

Josh Bernoff

Sociopathic, but yes.

Shel Holtz

Both things can be true.

Neville Hobson

…sorry, sociopathic. But it amazes me, it truly does. And we’ve got this situation where clearly there is evidence that if you use this in an effective way, it will help you be productive.

It will augment your own intelligence, to use a favourite phrase of mine. So AI is augmenting intelligence, not artificial intelligence. And yet that still encounters brick walls and pushback on a scale that’s ridiculous. It’s worse in an organisation when that’s at a leadership level, I would say.

So how do we kind of make this less of a threat as it’s seen by others, or is this part of the issue that those naysayers just see all this as a massive threat?

Josh Bernoff

Well, boy, that’s a deep question. So first of all, I always start with the data here, because I want to distinguish between my opinions and the data. And the data says that the more you use AI, the more likely you are to say that it is making you more productive. And as you said, 92% of the advanced users said that it made them more productive. And interestingly, 59% of the advanced users said that it actually made the quality of their writing better.

So it’s not just producing more, but producing better stuff. And one more statistic here: we actually asked them how much more productive. The average across all the writers who use it is 37% more productive. But like any tool, you need to get adept at it and learn what it’s good at and what you can use it for. And this technology has advanced way, way ahead of the learning about how to use it.

So there has to be a, basically a movement in every company and all writing organizations to teach people the best way to take advantage of it and what not to do. And in fact, one of the things that I recommended and that I tell some of the corporate clients I work with is find the people who are really good at this and then have them train the other people.

Because there’s nothing better than somebody saying, okay, here, let me show you what I can do with this.

I’ll just give you an example: this report itself. Obviously people are asking, well, did you use AI to write the report? I started out trying to use AI to analyse the data, and I found that it was not dependable. I’m like, okay, I’m gonna have to calculate these statistics the old-fashioned way, with spreadsheets and data tools. Every single word of the report was written by a human, me; at least, most people still think I’m a human.

But we had, you know, thousands of verbatims to go through. And the person to whom I delegated the task of finding the most interesting verbatims used AI to go in and find verbatims that were interesting: there were some positive ones, negative ones, you know, some diversity in terms of who they were from. So we weren’t quoting all technical writers. And that’s a perfect use, to go into a huge corpus of text and pull out some of the interesting things, becau