I’ll admit, as a writer, I am most definitely biased on the topic of AI programs, particularly those that produce written content instantaneously and—supposedly—error-free. When these technologies were first popularized a few months back, I remember thinking, “Welp, there goes the market for writers.”
For a moment, it seemed like magic. With a quick wave of a wand (or, in this case, the typing of a quick prompt), ChatGPT could turn out a 1,500-word article on any topic you wished for. I had professional connections online boasting about their ChatGPT-written articles, and even had authors approaching me to ask whether using such technology for their articles was acceptable (it wasn't).
But over time, questions arose. Where was it getting its information? How was it producing this content? What were its sources? How was the information being fact-checked and edited? As the curtain was pulled back, it became clear not only that AI wasn't ready to replace writers, but that doing so could lead to problems, including spreading false information, fostering brand distrust, and causing reputational damage.
AI writing removes the human experience
At first, watching the AI produce well-written, researched content within seconds can be fascinating. It offers endless information, backed by every source the internet can offer, all for free. But the closer you look, the more imperfections appear, and the more "off" it begins to feel. The information may seem correct, the vocabulary astounding, and the format impeccable, but upon further inspection, not only may the information be incorrect, but the content often reads like a dictionary: completely factual, robotic (perhaps an unfair criticism, considering it is a robot, but true nonetheless), and superficial.
In other words, it’s lacking a human voice and the knowledge, emotions, and experiences that typically accompany that voice.
Often, the best articles I read are from authors who are passionate about the topic on which they are writing and can use their personal and professional experiences, anecdotes, and understanding to convey their message to their audience. These articles contain a level of nuance and complexity; they use emotional appeals and an understanding of human emotions to connect with their readers in a way that ChatGPT and other such AI tools cannot. ChatGPT cannot feel or inspire. It can tell you the difference between a credit union and a bank, but it cannot truly convey why that difference matters on a deeper level.
This difference may not be obvious upon first reading, but without a clear human voice in the writing, something about it feels off and incomplete.
But what better tool to convey this point than ChatGPT itself? To test my theory, I gave it the simple prompt, "Why is stealing wrong?" Within seconds, it had written a ten-point bulleted list on the issue. I then searched an online forum for a human answer to the same question. Here are the results:
I should note that both of these responses have been edited down for the sake of space (I pared down ChatGPT's ten bullet points to four, removed a few sections from the human response, etc.), but none of the content has been changed.