Why I Never Use AI to Create Professional Content
Artificial Intelligence has proven to be a wonderful tool for writers looking to increase their efficiency and eliminate pointless busywork from their routines. However, it can be tempting for a writer to rely too heavily on AI chatbots in the age of mass-produced content.
Yet, as we’ll see below, there are several reasons why established writers should steer clear of using AI in their professional work. It might be great for gathering your favorite recipes into an organized system or personalizing cover letters and resumes, but when it comes to writing that you are selling, especially jobs for clients, there are some major issues with AI-produced content.
Copyright
In the current Wild West climate surrounding AI content, ownership stands out as the single biggest pitfall a professional writer can encounter when producing text with AI. The simple fact is that, right now, no one can answer the question, “Who can claim ownership of content produced by a non-human entity?”
Does the prompt engineer who created the specific content own that content? Does the company that created the chatbot own it? Do the engineers who built and trained the chatbot own it? Do the writers whose original works were used to build and train predictive AI models own it?
Unlike with text that you produced with your own brain and fingers, there is a laundry list of different entities that could potentially claim ownership of writing produced using AI. This is especially true given that the U.S. Copyright Office will not register works generated entirely by AI, because copyright requires human authorship. In its view, no one owns such content. This means everyone can, and likely will, try to claim ownership of AI text in the courts.
This problem is massively exacerbated for those of us who work with clients. Typically, when you write content for another party, that party will claim ownership of the content you produce for them in your work contract. This means when they pay you, part of what they’re paying you for is not your actual labor but the right to ownership of the intellectual property that was produced with your labor.
If the labor of a non-human entity produced the content that your client is publishing, their right to claim its copyright as their own, as per a typical contract, goes out the window. For now, it’s public domain.
This fact not only puts you in violation of your own employment contract, but it also robs your client of the copyright they are paying you to transfer, because you never had the ability to offer it in the first place.
This means both you and your client are now open to lawsuits and damages, especially if it turns out that the courts eventually decide the producer of AI-generated content owes the true owner of the content attribution or royalties. In that scenario, the real owner of that content can then turn around and sue chatbot creators and maybe even anyone who ever published anything generated by AI.
Of course, all of the copyright issues surrounding AI chatbots will eventually be settled, but that will require years of litigation. As of now, you should be aware that you do not own the copyright to anything you produce using AI and neither does anyone paying you for said copyright.
For this reason, you should personally produce any work you are paid for, including pieces you write and publish to sell on your own. Even without a client paying you, the risk of being sued for copyright violation down the line remains.
Plagiarism
Plagiarism goes hand in hand with the copyright ownership issues I discussed above, but it deserves its own section because it is an ethical issue as well as a legal one. AI chatbots are models developed to process natural language and spit out text appropriate to a given prompt.
This would not be a problem, except these models are trained using an enormous dataset drawn from a broad range of sources from whatever language is being used to train them. This means generative AI is, at its core, regurgitating the work of others.
Of course, this is another legal issue that is undoubtedly going to be litigated into oblivion, but it also opens up ethical issues that any serious professional writer should be concerned about, and not just because our work is undoubtedly being used to train AI if it is published practically anywhere.
Training AI using the work of human authors creates issues surrounding wage theft, intellectual property theft, and a host of other concerns stemming from copyright violations inflicted on human authors by AI and its creators.
As professional writers, we are all part of a larger community that is already frequently devalued and dismissed as frivolous and inessential. This is especially true of the STEM-focused crowd that uses our works to build their own, often without attribution or even a cursory acknowledgment that the technology they develop would be impossible without us.
Thus, we should all be deeply concerned with the plagiarism issues surrounding AI chatbots — regardless of the legal ramifications of said plagiarism — because when we plagiarize using AI, we’re hurting other writers, ourselves, and the wider community of readers who now have to sift through a bog of AI content if they want to read human-generated writing.
Privacy
The issue that sends a shiver right down my spine when it comes to generating content using AI is privacy. When you are using the interface of a chatbot that you did not create yourself and do not control in its entirety, you must assume you are being watched. Anything you ask a chatbot to do and anything you produce using that chatbot is potentially open to view.
Of course, this might concern anyone who has ever asked AI an embarrassing question or had it do something of questionable legality for them. But the problems run much deeper for professionals who decide to use it to generate their work.
This is because any restricted or sensitive information you share with a chatbot could be viewed by third parties who use your interactions with the bot to fine-tune it or improve the next model’s performance. You may have anonymity, but you have no privacy, and this can become a problem for anyone who inputs information that must remain private.
For instance, ChatGPT logs your conversations to use as potential training data, a fact OpenAI states explicitly in its privacy policy. Anything you type into the bot may be reviewed by company staff and used to train future models.
That includes confidential and even proprietary information for which a company might be ready to inflict real legal consequences if it were to leak. This is why anyone writing as a contractor or employee for a company should resist the temptation to seek a little AI-generated productivity boost. It could allow wandering eyes to access information they shouldn’t.
Quality
Of course, the quality issues with AI-generated writing have been discussed ad nauseam, but it bears repeating that if you are a professional writer who relies on AI content, your reputation for quality pieces could suffer.
If your clients wanted lifeless, redundant, assembly-line content, they would not be using you. They would fire all their writers, hire a prompt engineer, and pay the license fee for a bot that pumps out the same three or four variations of identical content.
Since this is already happening to some extent and can be expected to happen more in the future, you can safely assume that any company or person paying for your writing wants you, not GPT.
They want a human’s ability to take boring subjects and inject a level of interest into them through the written word. They want your human brain’s ability to maximize SEO without running afoul of rules surrounding keyword stuffing. They want the human emotion and human experience you can infuse into any content. Ethically and legally, you should give them what they’re paying for.
Bottom Line
Of course, AI will continue to develop, and, contrary to many naysayers, this is not a bad thing. We as professional writers should not be trying to stop the AI train. That would be foolish, impossible, and ultimately detrimental to technological progress.
The point is not to avoid embracing AI but to avoid embracing it unconditionally, without deep thought about the ways it can be used to inflict harm or the risks inherent in diving into the deep end without knowing what’s down there.
That’s why, at least for the foreseeable future, I will be keeping my copyrights firmly in my hand by avoiding outsourcing my work to a bot. I might let it give me synonyms for words or ask it to compile information for me, but if you’re reading words under my name, you can be sure I’m the one who wrote them.
P.S. It’s easy to feel trapped into using AI because no human can ever compete with its speed, so don’t miss my piece about how to write faster without AI.