Are human copywriters obsolete in the age of ChatGPT?

21 February 2023
7 minutes

When OpenAI recently launched ChatGPT, it felt like a turning point. With a single tool, you can instantly create a blog post, article or white paper—without having to hire a copywriter. ChatGPT makes it possible for anyone to quickly create text for free. So, does this mean the end of human copywriters?

The technology behind ChatGPT is truly impressive. The first time you try it, you’re almost certain to be amazed by the speed and quality. It’s easy to see why the tool is causing such excitement among marketers. For many of them, it’s like a dream come true.

The mockery machine

A recent article in The Guardian reported how Australian singer Nick Cave responded to a set of ‘Nick Cave-style’ lyrics produced by ChatGPT. The lyrics were sent to him by a fan, who wasn’t the first person to have that idea. In fact, Cave said he’d already received ‘dozens’ of AI-generated lyrics in recent weeks, proving just how widespread the ChatGPT craze has become.

Cave’s evaluation of the bot’s lyrical abilities was less than enthusiastic. He called it ‘a grotesque mockery of what it is to be human.’ 

‘Writing a good song is not mimicry, or replication, or pastiche, it is the opposite,’ Cave wrote in response in his newsletter.

Mimicry, replication and pastiche… Leave it to a true wordsmith like Nick Cave to find the perfect way to summarise what ChatGPT does.

An automated parrot

ChatGPT is basically an auto-complete tool on steroids. It knows how to finish your sentences based on what you type in. Give it a few lines and it can flesh out the rest of the story.

According to Jan Scholtes, a professor of text-mining at Maastricht University, the tool simply parrots what it has already learned: ‘The GPT models merely repeat statistical sequences of human language and other internet content to which they have been exposed during training,’ he explains.
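
To see what ‘repeating statistical sequences’ means in practice, here is a deliberately toy sketch (the corpus, function names and code are invented for this illustration and bear no resemblance to the scale or architecture of a real GPT model): a program that has only counted which word follows which can still ‘complete’ a sentence convincingly, without understanding a word of it.

```python
from collections import defaultdict, Counter

# A tiny invented training corpus
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which: a crude statistical model of word sequences
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(word, length=5):
    """Repeatedly append the most frequent next word seen in training."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # a fluent-looking phrase, recombined purely from counts
```

The output reads like a sentence, but every word is simply the statistically most likely successor of the previous one. Scale that idea up by billions of parameters and you have, roughly, the parrot Scholtes describes.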

ChatGPT falls short on facts

An auto-complete machine may be able to create readable texts, but they lack the human touch (we’ll get to that in a minute). A bigger problem is that ChatGPT can’t tell the difference between truth and fiction. ChatGPT’s creator, OpenAI, even admits this: ‘ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,’ they say.

That’s actually putting it mildly. ChatGPT presents information so convincingly that you probably won’t even notice that it’s untrue.

Scientists at Northwestern University in Chicago prompted ChatGPT to create fake abstracts of scientific research papers. They then asked other scientists to rate the authenticity of those texts, and they ran the texts through a plagiarism checker. The results were disturbing:

‘The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn’t do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.’

It seems we have a problem on our hands when experts can no longer distinguish between what’s true and what isn’t. If a scientist can’t tell fact from fiction, can we really expect the average consumer to?

Anyone can file a petition

Tech expert Jerry Fishenden shows how ChatGPT confidently gives a completely incorrect answer to the fairly simple question ‘Who can petition the UK government’s e-petition system?’ The bot’s answer seems so convincing that you’d almost believe it without thinking twice. But that’s not even the biggest problem, says Fishenden:

‘At best it will apologise for getting it wrong. But it doesn’t learn from getting it wrong. The next person is likely to get the same wrong or misleading answer and may well walk away believing it’s the truth rather than a lie.’

Image of a prompt to ChatGPT
Source: Blog Jerry Fishenden

So, what does this mean for marketers? It’s true that some types of marketing content are not heavily fact-driven. But at more advanced stages of the customer journey, content becomes increasingly specific. It takes true insight and factual information to add value for your audience.

Suppose you’re a marketer at a legal consulting firm and you need to write a blog post on employment law. Your text has to be flawless. Publishing content with factual errors hurts your reputation and makes your clients question your professionalism. So, do you think you can rely on ChatGPT to deliver the quality of content you need?

The legal profession has nothing to worry about

Any time a new tech hype occurs, we hear lots of scary predictions about how many professions it will make redundant. The rise of AI chatbots like ChatGPT has caused some people to question whether there will still be a need for human legal professionals in the future. Can’t an AI bot answer all your legal questions, citing the exact legislation?

It turns out that those fears are totally unrealistic, at least for now. The law firm Linklaters tested the quality of ChatGPT’s legal advice by asking it 50 questions. A team of three lawyers then evaluated the answers. Although a few of the answers were perfect, most were incomplete or simply wrong. The lawyers concluded that ChatGPT is generally a ‘poor’ legal advisor.

To make matters worse, it’s hard to tell when ChatGPT isn’t telling the truth: ‘Sometimes the right-sounding answers are completely wrong, so relying on them is dangerous,’ the lawyers said.

The limitations of AI

The online consumer tech magazine CNET has also concluded that it’s dangerous to trust AI texts without carefully reviewing them. It published 78 articles written with the help of AI, and many of them contained factual errors.

For example, one story about interest income boldly stated: ‘If you deposit $10,000 into a savings account that earns 3 [per cent] interest compounding annually, you’ll earn $10,300 at the end of the first year.’ Sounds good, but sadly it’s completely untrue. At a 3% interest rate, you’d earn $300 in a year, not $10,300.
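
The correct arithmetic takes only a few lines of Python (using the deposit and rate from the CNET example) and makes the bot’s confusion between ‘interest earned’ and ‘final balance’ plain:

```python
principal = 10_000      # deposit from the CNET example
rate = 0.03             # 3% annual interest

balance = principal * (1 + rate)        # balance after one year of annual compounding
interest_earned = balance - principal   # what you actually 'earn'

print(f"Balance after one year: ${balance:,.2f}")
print(f"Interest earned:        ${interest_earned:,.2f}")
```

After one year, the balance is $10,300, but the interest earned is only $300. The AI stated the balance as if it were the earnings.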

Errors in other articles show that AI doesn’t understand how mortgages and other loans are paid off. 

AI is no substitute for human intelligence

ChatGPT reproduces text and even imitates creativity in a way that seems surprisingly human at times. It’s also been carefully designed to act ethically.

But if you look more closely, it’s easy to see that this human quality is only skin-deep. ChatGPT’s ethical conscience, for instance, is easy to trick, as this thread on Twitter shows. All it takes is the right prompt and ChatGPT will happily produce unethical text.

It also soon becomes obvious that ChatGPT is not really creative. Sure, you can ask it to imitate a Shakespeare sonnet, and it can. But what do you get? A bad imitation of Shakespeare. It’s completely lacking in originality, which is one of the true signs of human creativity.

ChatGPT is like someone who learned to paint by watching hours and hours of Bob Ross videos. It produces pretty paintings, with all the right shapes and colours. But it’s not original. And it will never hang in a museum.

An AI-generated image of a man painting

Recognisable texts

Once you’ve used ChatGPT to create a few texts, you’ll start to notice a pattern. The texts all have the same dull structure. The sentences are usually too long. Sometimes the grammar is simply mangled. The statements are superficial and overly general. The style is impersonal and dry. 

These limitations are unsurprising when you remember that ChatGPT is just doing what it’s been trained to do. That’s why it follows predictable patterns and usually just comes up with slight variations on the same text.

What’s missing is the depth of human intelligence. There’s never an unexpected angle or creative twist. ChatGPT won’t pick the perfect quote from a recent article to highlight a point. And it doesn’t have any relatable anecdotes that it picked up at the bus stop. Yet it’s precisely that creativity and spontaneity that makes human storytelling so irresistible and engaging to readers.

Image of a prompt to OpenAI’s ChatGPT
ChatGPT has no knowledge of the world after 2021. Source: lablab.ai

Google prioritises human texts

Google has made very clear that it prefers content written by people for people. If you use ChatGPT to write landing pages, articles and blog posts, your content is likely to rank lower in the search results than high-quality text with the human touch. The AI-generated content lacks human features, and Google can tell the difference.

According to Google’s Danny Sullivan, the search engine has nothing against AI-generated content in and of itself. The problem is simply that AI-generated content is not as good.

A tweet about AI-generated content

So, even in the age of ChatGPT, ranking high in Google means creating content that answers real people’s search questions.

Look! It’s working!

Only time will tell how successful your company will be if you’re using ChatGPT for SEO purposes. Judging from the examples of BankRate.com and CreditCards.com, you might go pretty far. 

Both these companies have relied heavily on AI to improve their SEO—but always in collaboration with human content creators. The articles are ‘generated using automated technology and thoroughly edited and fact-checked by an editor on our editorial staff.’

The content seems to be performing well for now. The reason, according to Johannes Beus, CEO of Sistrix, an SEO software company, is that Google has been caught off-guard by the rapid development of AI.

‘Contradictory statements and hectic changes to guidelines show that Google currently has no strategy for dealing with the topic,’ Beus writes on his company’s website.

‘Google’s helplessness shows that this works in an environment such as financial information (a clear Your Money, Your Life topic and therefore highly visible on the radar at Google),’ Beus adds.

Give AI the human touch

As all these examples show, we still need humans to create good content. The human touch is indispensable if you want your content to be factually correct, engaging and able to rank high in Google search results. It takes a skilled content creator to check the facts and fix the stylistic errors that are so common in AI content. It is simply too risky to publish AI-created content on your website without carefully reviewing it first.

AI chatbots can be a useful tool, especially for short, repetitive tasks like writing emails or product texts with a fixed structure. But to publish longer content that really adds value, you still need human intelligence. 

Sure ChatGPT can generate a blog post called ‘The seven best ways to save money’ in 30 seconds. But you’re just getting a copy of information that your readers can easily find on your competitors’ websites. And ChatGPT can’t help you with current events, because it has no knowledge of the world after 2021. If you want to give your readers insightful, up-to-date content that they can’t find anywhere else, human copywriters are essential.

If you’re still not sure about the role of ChatGPT and the importance of human content writers going forward, just remember what ChatGPT itself has to say: ‘ChatGPT can be a useful tool, but it’s no replacement for human creativity and judgment.’
