ChatGPT is an AI chatbot that can take directions and perform tasks like writing essays. There are numerous issues to understand before making a decision on how to use it for content and SEO.
The quality of ChatGPT’s content is staggering, so the question of whether to use it for SEO purposes must be addressed.
Why ChatGPT Can Do What It Does
Contents
- Why ChatGPT Can Do What It Does
- Six Things to Know About ChatGPT
- 1. Programmed to Avoid Certain Kinds of Content
- 2. Unaware of Current Events
- 3. Has Built-in Biases
- 4. ChatGPT Requires Highly Detailed Instructions
- 5. Can ChatGPT Content Be Identified?
- 6. Invisible Watermarking
- Should You Use AI for SEO Purposes?
Simply put, ChatGPT is a type of machine learning model called a Large Language Model.
A large language model is an AI trained on vast amounts of data that can predict what the next word in a sentence should be.
The more data it trains on, the more kinds of tasks it is capable of doing (like writing articles).
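To make “predict the next word” concrete, here is a toy sketch of the simplest possible next-word predictor, a bigram model built from a made-up miniature corpus. GPT-class models use neural networks trained on billions of words, but the underlying task — guessing the most plausible continuation — is the same idea:

```python
from collections import Counter, defaultdict

# Invented miniature corpus; a real model trains on billions of words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word follows which -- a bigram model, vastly simpler
# than GPT's neural network, but the same "predict the next word" task.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow `word` in the corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- the most frequent follower of "the"
print(predict_next("sat"))  # "on"
```

The model has no notion of whether “the cat sat on the mat” is true; it only knows which continuations are statistically likely, which is exactly why accuracy is a separate problem from fluency.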
Sometimes large language models develop unexpected abilities.
Stanford University writes about how an increase in training data allowed GPT-3 to translate text from English to French, even though it was not specifically trained to do the job.
Large language models like GPT-3 (and GPT-3.5 which is the basis of ChatGPT) are not trained to do specific tasks.
They are trained with a broad range of knowledge which they can then apply to other domains.
This is similar to how a human learns. For example, if a human being learns the fundamentals of carpentry, they can apply that knowledge to build a table even if that person has never been specifically taught how to do so.
GPT-3 works similarly to a human brain in that it contains general knowledge that can be applied to multiple tasks.
The Stanford University article on GPT-3 explains:
“Unlike chess engines, which solve a specific problem, humans are ‘generally’ intelligent and can learn to do anything from writing poetry to playing soccer to filing taxes.
In contrast to most current AI systems, GPT-3 is approaching such general intelligence…”
ChatGPT incorporates another large language model called InstructGPT, which was trained to follow directions from humans and give long-form answers to complex questions.
This ability to follow instructions means ChatGPT can take directions to create an essay on virtually any topic, in any specified way, and within constraints such as word count and the inclusion of specific thematic points.
Six Things to Know About ChatGPT
ChatGPT can write essays on virtually any topic because it is trained on a wide variety of texts available to the general public.
However, ChatGPT has limitations that are important to know before deciding to use it in an SEO project.
The biggest limitation is that ChatGPT is not reliable for generating accurate information. The reason is that the model only predicts which word should come next in a sentence on a given topic; it is not optimizing for accuracy.
This should be a top concern for anyone interested in creating quality content.
1. Programmed to Avoid Certain Kinds of Content
ChatGPT is specifically programmed not to generate text on certain topics, such as explicit violence, explicit sex, and harmful content like instructions for building an explosive device.
2. Unaware of Current Events
Another limitation is that it is unaware of any content created after 2021.
So if your content needs to be up to date and fresh, ChatGPT in its current form may not be of use.
3. Has Built-in Biases
An important limitation to be aware of is that ChatGPT is trained to be helpful, truthful, and harmless.
These are not just ideals, they are intentional biases built into the machine.
It seems that programming to be harmless causes the output to avoid negativity.
This is a good thing, but it also subtly shifts the output away from what might otherwise be a neutral article.
In a sense, you have to take the wheel and explicitly tell ChatGPT to drive in the desired direction.
Here is an example of how bias changes the output.
I asked ChatGPT to write a story in the style of Raymond Carver and another in the style of mystery writer Raymond Chandler.
Both stories had happy endings that were unusual for both writers.
To get a result that matched my expectations, I had to guide ChatGPT with detailed directions to avoid happy endings, and, for the Carver-style story, to avoid a neat resolution, because that is how Raymond Carver’s stories often unfolded.
The point is that ChatGPT has biases and that you need to be aware of how they might affect your output.
4. ChatGPT Requires Highly Detailed Instructions
ChatGPT requires detailed instructions to produce higher quality content that has a better chance of being highly original or taking a specific point of view.
The more instructions you provide, the more sophisticated the output will be.
This is both a strength and a limitation to be aware of.
The fewer instructions there are in the content request, the more likely the output is to resemble the output of a similar request.
As a test, I copied the query and output that multiple people posted on Facebook.
When I asked ChatGPT the exact same question, it produced an essay in completely different words, but one that followed a similar structure and touched on similar subtopics.
ChatGPT is designed to introduce randomness when predicting what the next word in an article should be, so it makes sense that it isn’t plagiarizing.
But the fact that similar requests generate similarly structured articles highlights the limitations of simply asking “give me this.”
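A small extension of the toy bigram idea illustrates why identical prompts yield different wording but similar shape: instead of always taking the single most likely next word, the model samples from a probability distribution, so common continuations dominate the structure while the exact words vary from run to run. (This is a sketch with an invented corpus, not ChatGPT’s actual sampling code.)

```python
import random
from collections import Counter, defaultdict

# Invented miniature corpus for illustration only.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(word, rng):
    """Weighted random choice: frequent continuations are favored,
    but less common ones still get picked sometimes."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# Two "requests" with different random states: words drawn from the
# same distribution (similar structure), but differing word-by-word.
for seed in (1, 2):
    rng = random.Random(seed)
    print([sample_next("the", rng) for _ in range(4)])
```

Because every sample is drawn from the same learned distribution, two runs share subtopics and structure without ever copying each other verbatim — which matches the behavior observed in the Facebook test above.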
5. Can ChatGPT Content Be Identified?
Researchers at Google and other organizations have been working on algorithms to successfully detect AI-generated content for many years.
There are many research papers on the subject; I will cite one from March 2022 that used the output of GPT-2 and GPT-3.
The research paper is titled, Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers (PDF).
The researchers were testing to see what kind of analytics could detect AI-generated content that used algorithms designed to evade detection.
They tested evasion strategies such as using BERT to replace words with synonyms and adding spelling mistakes, among others.
What they found was that some statistical characteristics of AI-generated text such as the Gunning-Fog Index and Flesch Index scores were useful for predicting whether a text was computer-generated, even if that text had used an algorithm designed to evade the detection.
6. Invisible Watermarking
Of more interest is that OpenAI researchers have developed cryptographic watermarks that will help detect content created through an OpenAI product like ChatGPT.
A recent article called attention to a discussion by an OpenAI researcher available in a video titled Scott Aaronson Talks AI Safety.
The researcher says that ethical AI practices like watermarking can evolve to become an industry standard the way robots.txt has become a standard for ethical crawling.
“…we have seen over the past 30 years that large Internet companies can agree on certain minimum standards, whether out of fear of being sued, a desire to be seen as a responsible player, or whatever.
A simple example would be robots.txt: if you don’t want your website to be indexed by search engines, you can specify it and the major search engines will respect it.
Similarly, you could imagine something like watermarking, if we were able to prove it and show that it works and that it’s cheap and it doesn’t hurt the quality of the output and it doesn’t require a lot of computation and so on, that it would just become an industry standard, and anyone who wanted to be considered a responsible player would include it.”
The watermark developed by the researcher is based on cryptography. Anyone with the key can test a document to see whether it carries the digital watermark showing it was generated by an artificial intelligence.
The code can be in the form of how punctuation is used or in word choice, for example.
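Aaronson has not published implementation details, so the following is only a sketch of the general idea: a secret key drives a pseudorandom scoring function that nudges the generator’s choices among otherwise interchangeable words, and anyone holding the key can later check whether a text’s word choices are statistically skewed in the keyed direction. Everything here (the key, the scoring scheme) is invented for illustration:

```python
import hashlib
import hmac

KEY = b"hypothetical-watermark-key"  # invented; a real key would be secret

def keyed_score(prev_word, candidate):
    """Pseudorandom score in [0, 1) that only the key holder can reproduce."""
    msg = f"{prev_word}|{candidate}".encode()
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermarked_pick(prev_word, candidates):
    """Generator side: among near-equivalent words, prefer the one
    the key favors -- invisible to readers without the key."""
    return max(candidates, key=lambda w: keyed_score(prev_word, w))

def detection_statistic(word_pairs):
    """Detector side: watermarked text averages well above 0.5,
    while text written without the key averages around 0.5."""
    scores = [keyed_score(prev, word) for prev, word in word_pairs]
    return sum(scores) / len(scores)
```

Because the scores of unkeyed text look uniform, the detector needs a reasonably long sample before the statistic separates cleanly — which matches the researcher’s remark, quoted below, that the signal lives in the word choices of a long text.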
He explained how watermarking works and why it’s important:
“My main project so far has been a tool for statistically watermarking the outputs of a text model like GPT.
Basically, whenever GPT generates a long text, we want there to be an otherwise unnoticeable secret signal in its word choices, which you can use to later prove that, yes, this came from GPT.
We want it to be much more difficult to take GPT output and pass it off as if it came from a human.
This could be useful for preventing academic plagiarism, of course, but also, for example, the mass generation of propaganda—you know, spamming every blog with ostensibly on-topic comments supporting the Russian invasion of Ukraine, without so much as a building full of trolls in Moscow.
Or impersonating someone’s writing style to incriminate them.
These are all things that one might want to make more difficult, right?”
The researcher shared that the watermark defeats algorithmic efforts to evade detection.
But he also said that it is possible to defeat the watermark:
“Now, all of this can be defeated with enough effort.
For example, if you used another AI to paraphrase the output of GPT, then okay, we won’t be able to detect it.”
The researcher announced that the goal is to implement watermarking in a future version of GPT.
Should You Use AI for SEO Purposes?
AI Content is Detectable
Many people claim that Google has no way of knowing if content was generated using artificial intelligence.
I can’t see why anyone would hold this opinion because detecting AI is a problem that has already been solved.
Content that implements anti-detection algorithms can also be detected (as noted in the research paper I linked above).
Detecting machine-generated content has been the subject of research going back many years, including research on how to detect content translated from another language.
Autogenerated Content Violates Google’s Guidelines
Google states that AI-generated content violates its guidelines, so it’s important to keep that in mind.
ChatGPT May at Some Point Contain a Watermark
Finally, the OpenAI researcher said (a few weeks before ChatGPT was released) that watermarking would “hopefully” come in the next version of GPT.
So ChatGPT might be updated with the watermark at some point, if it’s not already watermarked.
The Best Use of AI for SEO
The best use of AI tools is to scale SEO by making workers more productive. That usually means letting the AI do the tedious work of research and analysis.
Summarizing web pages to create a meta description might be an acceptable use, as Google specifically states that it’s not against its guidelines.
Using ChatGPT to generate an outline or summary of content could be an interesting use.
But outsourcing the creation of content to an AI and publishing it as-is may not be the most effective use of AI for many reasons, including the possibility of it being detected and the site receiving a manual action (aka being banned).
Featured image by Shutterstock/Roman Samborskyi