Why AI Cannot Replace Your Writer: Part One

I’ll admit, as a writer, I am most definitely biased on the topic of AI programs, particularly those that produce written content instantaneously and—supposedly—error-free. When these technologies were first popularized a few months back, I remember thinking, “Welp, there goes the market for writers.”

For a moment, it seemed like magic. With a quick wave of a wand—or in this case, the typing of a quick prompt—ChatGPT could turn out a 1,500-word article on any topic you wished for. I had professional connections online boasting about their articles written by ChatGPT and even had authors approaching me to ask whether using such technology for their articles was acceptable (it wasn’t).

But over time, questions arose. Where was it getting its information from? How was it producing this content? What were its sources? How was the information produced being fact-checked and edited? As the curtain was pulled back, it became clear that not only was AI not ready to replace writers, but that doing so could lead to issues including sharing false information, fostering brand distrust, and causing reputational damage.

It removes the human experience

In AI writing

At first, watching the AI produce well-written, researched content within seconds can be fascinating. It offers endless information, backed by every source the internet can offer, all for free. But the closer you look, the more imperfections appear, and the more “off” it begins to feel. The information may seem correct, the vocabulary astounding, and the format impeccable, but upon further inspection, not only may the information be incorrect, but the content often reads like a dictionary: completely factual, robotic (perhaps an unfair criticism, considering it is a robot, but true nonetheless), and superficial.

In other words, it’s lacking a human voice and the knowledge, emotions, and experiences that typically accompany that voice.

Often, the best articles I read come from authors who are passionate about the topic they are writing about and can draw on their personal and professional experiences, anecdotes, and insights to convey their message to their audience. These articles contain a level of nuance and complexity; they use emotional appeals and an understanding of human emotions to connect with readers in a way that ChatGPT and other such AI tools cannot. ChatGPT cannot feel or inspire. It can tell you the difference between a credit union and a bank, but it cannot truly convey why that difference matters on a deeper level.

This difference may not be obvious upon first reading, but without a clear human voice in the writing, something about it feels off and incomplete.

But what better tool to convey this point than ChatGPT itself? To test my theory, I gave it the simple prompt, “Why is stealing wrong?” Within seconds, it had written a ten-point bulleted list on the issue. I then did a quick search in an online forum for a human answer to the same question. Here are the results:

I should note that both of these responses have been edited down for the sake of space (I pared ChatGPT’s ten bullet points down to four, removed a few sections from the human response, etc.), but none of the content has been changed.

ChatGPT’s Response

“Stealing is generally considered wrong for several ethical, moral, and practical reasons:

1. Violation of Rights: Stealing involves taking something that belongs to someone else without their consent. This violates the basic principle of respecting other people’s rights to their property and possessions.

2. Trust and Social Harmony: Trust is fundamental in any functioning society. Stealing erodes trust between individuals and within communities. When people fear that their belongings might be stolen, it creates a sense of insecurity and disrupts the social fabric.

3. Economic Impact: Stealing can have economic consequences. It can lead to financial losses for individuals, businesses, and governments. Theft can result in higher prices for consumers and can harm the livelihoods of those who rely on their property for income.

4. Fairness and Justice: Stealing is unfair because it takes something from one person without providing any compensation in return. In a just society, individuals should be able to enjoy the fruits of their labor and efforts without fear of having their possessions taken away...”

Human Response

“I don’t steal. At least, not anymore. When I was a kid I stole when I was hungry because I came from a poor household. As I got older, there was a bit of fun involved. There was an addicting rush, and the game became a criminal moving of the goalposts. ‘Well I can steal one hundred dollars… what about two hundred?’

It makes sense to me that theft is unethical because it violates three important unwritten social contracts. Truth/Honesty, Individuality, and Free Market Ideals.

First: Truth and Honesty. In Sam Harris’s book Lying, he convinced me that truth is ethical because it gives everyone the correct information about things that affect them. If we are honest and open, we can make informed decisions about the world. Theft, and the necessary concealment of it to avoid retribution, goes against that.

Second: Individuality. Theft of personal belongings violates one’s sense of self. As many people define themselves and their place in this world by their belongings, whether through status symbols, wealth or sentimental memorabilia, the theft of these is potentially very harmful. If I stole the only picture of your deceased grandmother, would you be hurt?…”

At first, ChatGPT’s response seems like a great answer. Ten separate points? Great vocabulary? Can’t do much better than that! But compare it to the response from the online forum. That response, even in its much shorter, edited form, does a great job of building a connection. The author starts by relating the question to their personal life experiences and struggles, goes on to reference literature and ethics, and asks the reader rhetorical questions to drive the point home.

Now give the AI response a second read, and in comparison to the human response, it reads like a dictionary, completely bereft of emotion and experience. There are no emotional appeals, personal testimonies, references to outside sources, or connections made to the reader, leaving the human response the clear winner.

In AI images and art

If the text was not enough to convey AI’s lack of understanding of the human experience, perhaps images will be. For CUSO Magazine’s 2023 Summer Print Edition, we were looking for cover art involving orange juice to go with the article we had selected. Out of curiosity, we asked AI to create two images for us: one of someone drinking orange juice and another of someone making orange juice. Below are the two works of art we got back:

Take a few seconds to look at the images above and see how many issues you can identify.

Let’s start with the first. While the orange lips and face are glaringly obvious (and quite hysterical), there are other inaccuracies in the image that take a bit longer to notice. Her pinky finger, for instance, is incredibly short and awkwardly shaped. Did you notice that the three fingers wrapped around the glass on the bottom right are not attached to a hand or arm? What about the slice of orange to the right of her chin that isn’t attached to any glass? Or the floating orange behind her hair that is seemingly unattached to the tree?

As for the second, viewers distracted by the explosion of orange juice (an explosion this AI-rendered man seems to be incredibly happy about) may have missed that one of his hands is missing a finger and the other hand is…something that’s most certainly not a hand. His elbow, in my opinion, juts out just a bit too much and also seems to be possessed by a ghost of some kind. Or maybe that’s his shirt bleeding through at the elbow.

Sure, he’s getting some orange juice, but why is he pouring it straight into the green concoction instead of the orange one? Why does this whole scene appear to be taking place in a factory? Is that where all the AI-rendered people make their orange juice? And what is that machine he’s using? The questions are endless.

These images do a fantastic job of conveying AI’s gaps in knowledge, particularly in relation to the human experience. What AI has instead is something I’d label “human adjacent.” It’s close—in an uncanny valley sort of way—but wrong enough to matter. AI-generated writing is the same, though the differences are often not quite as obvious when you’re staring at a well-written document that was generated in ten seconds. But take a closer read, and the difference is stark in a way I promise your readers will notice.

Problems beyond looks

For small tasks, AI can fill a need, but the technology is simply not developed enough to take the place of writers, artists, marketing teams, and more. It’s important to remember that your audience is human, and if you’re looking to create long-lasting content that inspires and connects, your writers should be human too.

In part two of this series, we will delve into the ethics and legal debates surrounding AI content and why organizations looking to use this content should tread carefully.
