We ran the same prompt through ChatGPT multiple times and it returned almost identical suggestions. So what happens if thousands of agencies turn to ChatGPT to write an article on SEO trends for 2024 at the end of the year? Is everyone going to end up with the same content?
Depth issues with ChatGPT content
None of the content ChatGPT generated offered anything of depth or real value. Take a look at what it has to say about local search as an example:
Suggesting local search is an SEO trend in 2023 (maybe in 2014) is an underwhelming start. In fact, the whole passage reads like something written a decade ago, mentioning the rise of voice search and mobile devices as if these are also new things. It’s all very generic and lacks any depth or value.
Using ChatGPT to write ad copy
We’ve also tested ChatGPT for writing ad headlines for several of our PPC customers.
It really struggled with adhering to the constraints of headlines in Google Ads – for example, not using exclamation marks and the max 30-character limit. When we tried to correct ChatGPT with follow-up prompts, it actually wrote longer headlines and included exclamation marks.
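The two constraints mentioned above are easy to check programmatically, which is one way to catch non-compliant AI-generated headlines before they reach an account. Below is a minimal sketch in Python; the `headline_issues` helper and the sample headlines are our own illustration, and Google’s actual editorial policies cover far more rules than these two.

```python
# Minimal sketch: flag candidate headlines that break two Google Ads rules
# discussed above -- the 30-character limit and the ban on exclamation
# marks in headlines. Real editorial policy includes many more checks.

MAX_HEADLINE_LENGTH = 30

def headline_issues(headline: str) -> list[str]:
    """Return a list of rule violations for a single headline."""
    issues = []
    if len(headline) > MAX_HEADLINE_LENGTH:
        issues.append(
            f"too long: {len(headline)} chars (max {MAX_HEADLINE_LENGTH})"
        )
    if "!" in headline:
        issues.append("contains an exclamation mark")
    return issues

# Hypothetical ChatGPT output to illustrate the checks
candidates = [
    "Affordable SEO Services",
    "Boost Your Rankings Today!",
    "Professional Search Engine Optimisation Services",
]

for h in candidates:
    problems = headline_issues(h)
    print(f"{h!r}: {'OK' if not problems else '; '.join(problems)}")
```

A check like this can sit between the generation step and the ad platform, so any headline the model writes too long, or with banned punctuation, is rejected automatically rather than corrected by hand.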
It’s worth noting that ChatGPT’s algorithm is biased towards longer responses. OpenAI acknowledges this as one of its limitations: “These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues”.
The quality of ChatGPT’s copywriting is pretty poor as well, but this shouldn’t surprise anyone. Copywriting is a highly-specialised niche. ChatGPT is trained to follow the conventions of natural language, but advertising language deliberately breaks the rules ChatGPT was trained on.
What does this mean from an SEO perspective?
The fastest way to assess ChatGPT as an SEO tool is to compare the content it generates against Google’s quality rater guidelines.
These guidelines are used by Google’s human team of quality raters who are tasked with assessing the quality of its search results. They manually analyse web pages and grade them on an extensive range of factors grouped into four key characteristics:
- Experience
- Expertise
- Authority
- Trust
For a more in-depth explanation, take a look at our analysis of Google’s latest quality rater guidelines (E-E-A-T).
Interestingly, when we asked ChatGPT to write a blog post on key search trends for 2023, it included E-A-T (the older version of the framework) and insisted on its importance.
The system’s information may be out of date but it’s right about the importance of E-E-A-T, as highlighted by the recent update to the guidelines. It’s no coincidence that Google is adding more detail to its guidelines at a time when AI-content tools are going mainstream.
As a result, E-E-A-T will become even more important as Google needs to verify the quality of content produced by experts.