Back when I first started my career in digital marketing, things were peachy.
Why? Because it was super easy to game search engines by link building en masse, including by spamming out articles crammed with backlinks to content farms. You could pretty much guarantee page one results for big-money keywords.
Then, along came Google with its Penguin update, and the industry changed overnight.
Sites that had been engaging in these practices, known as black hat SEO (spooky), were wiped out. Panic set in as client rankings tumbled or, worse, sites were handed manual penalties by Google and fell off a cliff entirely.
In the past 10 or so years, SEO has evolved to reward sites that put in a lot of hard work. Some of the basic principles, particularly from a technical point of view, remain almost the same.
However, developing successful strategies to get a site to rank on the content and link building side of things is now far more complex. And unlike before, results are far from guaranteed.
So how does the emergence of AI technology fit into this? Well, for one, it’s allowed for a resurgence of sites that are absolute dog eggs. And secondly, AI is being pushed by both major search engines, Google and Bing, in a tech-giant willy-swinging contest.
And I think it’s going to break the internet.
What is AI content?
AI content can be produced very, very quickly by feeding prompts into AI copywriting software or AI chatbots, like ChatGPT. While it’s far from perfect, it is capable of producing content that could pass as having been written by a person.
These programs use artificial neural networks to learn language and process information, then churn out content or suggest improvements to it based on the instructions provided by the user.
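To give a sense of just how little effort is involved, here’s a minimal sketch of generating an ‘article’ from a single prompt. It assumes the OpenAI Python client; the model name, prompt and output handling are purely illustrative, not a recommendation of any particular tool.

```python
# Minimal sketch: one prompt in, one 'article' out (assumes the OpenAI Python client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a 600-word blog post targeting the keyword 'best running shoes', "
    "with an H1, three H2 subheadings and a conclusion."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # a full draft, seconds after hitting run
```

Swap in a list of keywords and a loop, and you have the makings of a content farm.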
A recent study by Optimo revealed that AI-assisted and AI-created content can rank well, especially in certain categories. And the number of people utilising AI to write copy has risen quickly as these models continue to improve.
Why is AI content a problem?
Quality
Not only has AI-generated content already stiffed copywriters and journalists out of a job, but to add insult to injury, it’s churning out absolute garbage. Great for sites like BuzzFeed, which rely on driving traffic using clickbait headlines. Bad for the content writers (obviously), the content consumer, and the internet overall as the same bland guff is repurposed over and over and over again. This article from Noor Al-Sibai and Jon Christian for Futurism highlights the issue perfectly.
The sheer volume at which AI content can be produced may also result in genuinely well-researched content being buried.
Plagiarism
The information needed to generate AI content and Google’s Search Generative Experience (SGE) results must come from somewhere.
Producing content that ranks well requires a lot of time and effort. But AI does not weep for the humans.
Humans have done all of the legwork in researching, fact-checking, and writing the content… not to mention designing the page to make it load fast while looking nice, sourcing images, and considering where in the copy it would be helpful to link to other resources.
And then along comes AI to present the user with a cobbled-together version of the information people have worked so hard on, be that in an article, or by answering a search query. All of the resources that have been poured into writing that content have effectively been ripped off. And do you think AI gives a hoot about copyright? It does not.
Writing for Tom’s Hardware, Avram Piltch sums it up well:
For years, both users and Google itself have complained about “content farms,” websites that produce shallow, low-quality articles at scale on a wide variety of topics so they can grab top search rankings. Google released a specific “Panda” algorithm update in 2011 that was primarily targeted at content farms and recent updates use the author’s expertise or the helpfulness of the article as ranking factors. However, with its LLM (Large Language Model) doing all the writing, Google looks like the world’s biggest content farm, one powered by robotic farmers who can produce an infinite number of custom articles in real-time.
Not only do AI-generated articles and search results provide inaccurate and out-of-date information, but they also don’t cite their sources. This brings us to the next issue – trust.
Trust
It’s already difficult to trust what you read these days. From the proliferation of fake news, to journalists publishing quotes and ‘facts’ from people with absolutely no credentials in the topic being discussed, things are a real mess.
And AI is only going to make things worse.
Not only can you use it to create an article packed with falsehoods and misinformation in minutes, but for AI search results in particular, there is no way to know where the information has been gathered from. This makes it almost impossible to verify whether it’s true or false.
And it’s not just written content that can be generated with AI to spread misinformation. Deepfakes, which are AI-generated images and videos, are also being utilised to deceive people, which is particularly scary when being deployed to push a particular political or social agenda.
NewsGuard did an excellent study on Unreliable Artificial Intelligence-Generated News (UAIN) websites recently. Not only do these sites suck when it comes to accuracy and quality, but they are also responsible for ripping off advertisers.
For those of you who are familiar with Google Ads, its Google Display Network (GDN) allows advertisers to serve ads on websites that are part of its AdSense program. For years, I’ve come across some very dubious websites and YouTube channels which are part of this network, but that’s another story.
Anyway, in order to be eligible, websites must not be in violation of Google’s ad policies:
Google’s ad policies state that sites may not “place Google-served ads on pages” that include “spammy automatically-generated content,” which it defines as, among other things, “content that’s been generated programmatically without producing anything original or adding sufficient value.”
The research from NewsGuard has shown that this policy is simply not being enforced. In fact, more than 90% of ads being shown on these UAIN sites were served by Google Ads.
This means that it is now super easy to make money through programmatic advertising by setting up thousands of crappy websites that only publish AI content. As a result, advertisers may as well go ahead and pour their ad spend down the drain.
As reported by MIT Technology Review, research conducted by the Association of National Advertisers found that 21% of ad impressions in its sample went to made-for-advertising sites. The association estimated that this could equate to around $13 billion in global ad spend being wasted annually.
AI has been touted by marketers as a technology that can make our lives easier. And there’s no denying that it does. But many of those marketers are not thinking about the implications.
What are your views on AI taking over the internet? Let me know.