Why AI Cannot Replace Your Writer: Part Two

In part one of this series, we discussed the growing popularity of AI-generated content and where it falls short in terms of creativity and voice. While AI work may seem like a masterpiece at first glance, it begins to show its flaws upon closer inspection, most notably its distinct lack of a human voice. For part two, we will delve into another troubling aspect of AI creation: legal and ethical concerns.

Plagiarism and ethics concerns

If those beautiful photos of orange lips and deformed hands weren’t enough to dissuade you from using AI, there are other reasons to hesitate before letting AI generate content for you: namely, the ethical and legal debates in which the creation and use of AI content are currently steeped.

The creation of content

When it comes to the creation of content, the companies behind AI often use data, creations, and information taken from the internet without any regard for the owners of that content. AI art, for example, learns by examining millions of pieces of artwork and replicating what it learns. However, neither the AI nor the company behind it holds the rights to those images.

This has created conflict for many companies accused of illegally using consumer data and content to improve their AIs. Google is currently facing a lawsuit for allegedly using copyrighted works to evolve its AI, Bard. The lawsuit claims that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans,” along with “virtually the entirety of our digital footprint,” including “creative and copywritten works,” to build its AI products.

An article on the lawsuit notes that “The eight plaintiffs have accused Google of taking a variety of content they shared on social media without permission, ranging from photos on dating websites to playlists saved on Spotify to videos uploaded onto TikTok. One of the claimants, who is described as a best-selling author from Texas, more specifically accused Google of copying a book they wrote in its entirety to train Bard.”

Google didn’t outright deny the claims but argued that it has been open about using publicly available data to train services such as Google Translate, with Bard simply being the latest addition. CNN noted that lawyers for the claimants argued in turn that “publicly available” does not mean “free to use for any purpose.”

However, if Google is found to have illegally used data to train its AI, any content created with Bard could be in violation of copyright law, which could cause complications for companies using AI content.

Google isn’t alone in the controversy, either. The popular virtual communications company Zoom recently came under fire for adding a clause to its updated terms and conditions that seemingly gave the company unlimited rights to use consumer data (including meetings, faces, files, conversations, etc.) to train its AI. The terms identified two types of data: service-generated data (i.e., features used, location, etc.) and customer content (transcripts, recordings, etc.). The terms stated, in two separate places, that both types of data could be used for “machine learning or artificial intelligence.”

According to AP, “Experts said that the language in the March update was wide-reaching and could have opened the door for the company to use that data without additional permission if it wanted to.”

Though the change was instituted in March, users only became aware of the issue when a concerned Zoom user posted the terms on social media, leading to public outcry and calls to delete Zoom accounts. As a result, many individuals and companies alike claimed to have dropped the service. Zoom quickly altered the wording in its terms of service and claimed it “will not use audio, video or chat Customer Content to train our artificial intelligence models without [user] consent.”

The use of content

Even if AI content is created ethically and legally, there is still great controversy over the use of such media. Schools and universities are torn on whether or not to outright ban the technology, with some claiming it can be a tool for research purposes and others—like New York City’s public schools, which banned ChatGPT—arguing that it is plagiarism.

However, as AI is a piece of technology and not a person, experts say plagiarism doesn’t apply: there isn’t a person in this scenario being stolen from. On the other hand, regardless of whether or not AI is a person, the work students and professionals are using is not their own. For students, this seems to be a black-and-white issue, but for professionals, it can be a gray area.

Despite my bias against AI, there are no doubt numerous places where it is useful and can do a great job, so long as you’re not asking for orange juice. For those who lack writing skills, AI can be a great tool for writing a resume or creating short, informational content for social media. Even credit unions can benefit from this technology. In fact, CUSO Magazine author and web application developer Sam Lechenet shares great advice in his article on using AI-generated content for credit union websites (though I’d recommend you leave the “about us” page to a real person).

But beyond such applications, using content—stolen or not—that was not created by the person using it (and potentially putting their own name on it) can come with risk, or at the very least, an unsettling feeling. There are several reasons to reconsider passing off AI writing as your own. If putting your name on something you didn’t actually create doesn’t stop you, the fact that such content is unchecked and unsourced should.

It’s true that AI continues to learn and improve every day, but as the images we examined in part one demonstrated, even the most intelligent of artificial intelligences can leave an orange hanging in midair or fingers floating without a hand to attach to. Yet users of such technology tend to have a high level of faith in its accuracy—perhaps because they assume the AI is smarter than it is, or because they believe that with endless resources to pull from, it can’t go wrong—which can lead to incorrect information being published and shared.

Men’s Journal, a well-known publication, learned this lesson the hard way after publishing an AI-generated article that contained incorrect and potentially harmful information. The article in question, which was published less than 24 hours after the Arena Group (the owner of Men’s Journal, Sports Illustrated, and multiple other publications) announced it would be using AI to help create articles, was found by an expert to contain no fewer than 18 “inaccuracies and falsehoods.” The publication claimed the article had been fact-checked by its editorial team but was quick to quietly tweak it and remove the errors.

It seems their blind faith in the technology was misplaced, though Sam Altman, one of the founders of OpenAI, the company behind ChatGPT, said back in December 2022 that no such faith should exist to begin with. In a tweet, he wrote, “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

When users responded, pointing out that so far, they had had good experiences with the program and the information seemed correct, Altman replied, “it does know a lot, but the danger is that it is confident and wrong a significant fraction of the time.”

This confidence, and in turn the user’s confidence in the technology, can lead to big mistakes, such as the Men’s Journal incident. Thankfully, the content in that situation was corrected, but the inaccuracies were only discovered because of the discussion around the Arena Group’s announcement that it would use AI. Had the publication been less well known, such as a small health blog, it’s possible the errors would have gone unnoticed and caused issues for readers acting on false health advice.

Professionals and companies using this content for their social media, websites, blogs, or other work purposes should be sure to review it thoroughly. Posting inaccurate, unchecked information can lead to reputational damage, lost user loyalty, and brand distrust.

Maybe one day, but not yet

At the end of the day, the decision of whether or not to use AI tools such as ChatGPT, Bard, and others comes down to the context and the user, though a warning label along the lines of “buyer beware” feels warranted. After all, putting your name or your credit union’s name on AI-generated content can have negative impacts that should be weighed before doing so, even if that content is error-free, which, as we know, is often not the case.

However, AI will continue to learn and improve at an impressive rate, so who knows? In a year it could be writing sonnets better than the greats.
