AI is everywhere, whether consumers like it or not.
I’m personally somewhat on the fence when it comes to AI. Yes, it can occasionally be helpful, but I have major concerns about how the technology is being used: to scam people, to create deepfakes and rage bait for political gain, its impact on both the environment and people, and how much AI slop is accelerating the general ‘enshittification’ of the internet (for more on that, I recommend this excellent book by Cory Doctorow). But I digress.
For marketers, the promises AI companies make about their products sound appealing: lower costs, faster turnaround times, and infinite creative variations at the click of a button. But as recent headlines have shown, rushing to adopt AI tools can lead to reputational damage and costly errors.
While AI tools offer opportunities for research and creative efficiency, they are not even close to being the ‘perfect’ solution many hoped for.
Here is why you need to proceed with caution, and how to leverage these tools effectively without losing the human touch.
The high cost of AI hallucinations
One of the most significant risks in using AI for professional work is ‘hallucination’, where an AI model confidently invents facts, figures, or sources. Given that these models have been trained on vast swathes of human-created data, you might expect LLMs to at least be able to regurgitate that information accurately. But you’d be wrong.
A stark example of this recently hit the headlines, involving the consultancy firm Deloitte. The consulting giant faced major backlash after it was revealed to have used AI to generate reports for government clients, including in Australia and Canada. The AI didn’t just analyse data; it fabricated entire sections, inventing experts and citations that did not exist.
Being caught was obviously embarrassing for Deloitte. But when you consider that one of the reports was for Australia’s Department of Employment and Workplace Relations, and that its findings would have informed policy affecting welfare recipients, we move into dangerous territory where the shitty work produced by AI hurts real people. Deloitte didn’t seem to care about that, though, and instead doubled down, offering to refund only part of the report’s $440,000 AUD cost.
The fallout was still pretty severe, reportedly costing hundreds of thousands of dollars in refunds and damage control. For marketers, the lesson is clear: if you use AI for desk research or content creation, every single citation needs to be fact-checked by a human.
Relying on an LLM for accuracy is a gamble you shouldn’t take. Yes, it can compile information quickly and at scale, but it’s pointless if the information is completely wrong. If you’re putting a piece of research out there, you’d better make sure it’s correct. If you don’t check it, someone else will. And you’ll be left looking incredibly silly with your pants pulled down.
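That human fact-check can at least be made less tedious with tooling. As a purely illustrative sketch (the function name and draft text are my own, not from any real workflow), a few lines of Python can pull every URL out of an AI-written draft into a checklist, so a person can open and verify each source before anything is published. This finds candidate citations; it does not prove they are real.

```python
import re

def extract_citations(text: str) -> list[str]:
    """Pull every URL out of a draft so a human can verify each one.

    Deliberately simple first pass: it lists candidate links, it does
    not confirm they exist or say what the draft claims they say.
    """
    url_pattern = re.compile(r"https?://[^\s)\]>\"']+")
    seen: set[str] = set()
    urls: list[str] = []
    for match in url_pattern.findall(text):
        cleaned = match.rstrip(".,;")  # strip trailing punctuation
        if cleaned not in seen:  # deduplicate, preserving order
            seen.add(cleaned)
            urls.append(cleaned)
    return urls

draft = (
    "Our survey (https://example.com/report-2024) found X, "
    "echoing earlier work at https://example.org/study."
)
for url in extract_citations(draft):
    print(f"[ ] verify: {url}")
```

The point of the checklist output is that a named human still ticks every box; the script just makes sure nothing slips past unchecked.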
Creative backlash: when ads lose their soul
AI doesn’t just struggle with data; it also struggles to capture genuine human emotion. Consumers are becoming increasingly savvy about (and critical of) low-effort AI content.
During the festive season, Coca-Cola faced a wave of negative sentiment for its ‘Holidays Are Coming’ ad remake, which relied heavily on AI-generated visuals. Critics labelled the campaign ‘soulless’ and ‘creepy’, while others pointed out inaccuracies, like the lorries changing dimensions in almost every shot.
Similarly, McDonald’s has faced criticism for its festive ad – ‘It’s the most terrible time of the year’ – very festive, I think you’ll agree. The ‘story’ of the ad is that if you fall over on the ice and crack your skull, almost hang yourself while putting up fairy lights, or hate spending time with your family, eating a Big Mac will make it all better. McDonald’s was clearly expecting some backlash as a result of churning out AI slop, as the agency that created the ad disabled comments on the YouTube video.
These examples highlight a growing consumer fatigue with AI-generated creative work. When brands cut corners, audiences notice, and the ‘uncanny valley’ effect can quickly turn a campaign into a PR nightmare. At the end of the day, these companies might have saved a few bucks by sacking their entire creative team, but in the long term, is it worth the negative fallout when consumers think less of your brand and therefore decide not to part with their hard-earned cash?
Arguably, both Coca-Cola and McDonald’s are so massive that they don’t actually give a shit, as people will buy their products anyway (not me, though, as they are both on the BDS list, but again, that’s a rant for another blog post about ethics, probably).
Given all the hate these ads are getting, smaller brands, especially those that claim to hold values or care about their customers, should be cautious about using AI-generated creative.
The reality gap: AI agents vs. human freelancers
Beyond the headlines, is AI actually capable of doing the work of a human professional? According to recent research, the answer is often no.
A paper from the Center for AI Safety introduced the ‘Remote Labor Index’ (RLI), a benchmark testing whether AI agents could realistically perform real-world, paid work.
Researchers tested AI against a wide spectrum of real-world jobs, including product design, architecture, and video animation. Far from simple admin, this dataset included high-value projects costing over $10,000 and requiring 100+ hours of effort. The study covered more than $140,000 worth of skilled human labour, totalling over 6,000 hours of work.
The results were telling, showing a yawning gap between what AI agents can currently do and what a real human worker delivers. The study found that “Across evaluated frontier AI agent frameworks, performance sits near the floor, with a maximum automation rate of 2.5% on RLI projects.” What a great result!
AI agents failed to complete complex tasks to a professional standard, often getting stuck or producing sub-par outcomes. In many cases, a human on a gig economy platform could perform the same work to a much higher standard for the same or even lower cost.
This debunks the fallacy that replacing entry-level or freelance labour with AI is an immediate route to efficiency. If you need reliable, high-quality output, the human workforce is still your best bet.
I encountered an issue when using AI tools this week, which demonstrated this perfectly. A simple task, I assumed, for the AI agent – transcribe a video, with timestamps. I provided the file as an MP4, gave some clear instructions, and thought, “cor, it is going to be fab not having to type all this up”.
However, for whatever reason, the AI decided that the video ended 10 minutes early, and was adamant that the last part of the video was just the logo of the production company (it wasn’t). I also asked it to transcribe verbatim, as it was needed for an SRT file, among other things, so accuracy was key. Regardless, it decided to chop out huge parts of the discussion, or just made stuff up. When I pulled it up on this, it said it wouldn’t do it next time. And then, of course, it did exactly the same again.
By this point, I was getting pretty wound up. So, I ended up using yet another AI tool to extract the raw transcript file, then I watched the entire video on 1.5x speed to check for accuracy, correcting any mistakes, and then could finally use the original AI tool to put the transcript into a more digestible format for the client than a minging-looking .txt file. This all took me about an hour.
Could I have typed the transcript faster? No, but a professional transcriber probably could have. Would it have taken me maybe another hour or so longer to watch it at 0.5x speed and type it out myself, given that I’d already faffed about for an hour using AI? Yeah, I reckon it would. But it would also have spared me a few grey hairs.
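Ironically, the “video ended 10 minutes early” failure is exactly the kind of thing a dumb script catches better than an AI agent. As a minimal sketch (the function names and tolerance are my own assumptions, not part of any tool I used), you can parse the cue timestamps in an SRT file and flag a transcript whose final cue stops well short of the known video length:

```python
def srt_timestamp_to_seconds(ts: str) -> float:
    """Convert an SRT timestamp like '00:41:30,500' to seconds."""
    hours, minutes, rest = ts.split(":")
    seconds, millis = rest.split(",")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds) + int(millis) / 1000

def check_srt_coverage(srt_text: str, video_seconds: float,
                       tolerance: float = 60.0) -> bool:
    """Return True if the last subtitle cue ends near the video's end.

    A final cue that stops minutes short of the video length is a red
    flag that the transcription tool silently dropped the ending.
    """
    end_times = []
    for line in srt_text.splitlines():
        if " --> " in line:  # SRT timing lines look like 'start --> end'
            _, end = line.split(" --> ")
            end_times.append(srt_timestamp_to_seconds(end.strip()))
    if not end_times:
        return False
    return video_seconds - max(end_times) <= tolerance
```

Run against my file, a check like this would have failed immediately, because the last cue ended ten minutes before the video did, and I’d have known not to trust the output before wasting an hour on it.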
How to use AI for marketing the ‘right’ way
So this has all sounded pretty negative so far, eh? But when used strategically, AI can be a powerful assistant in your marketing toolkit.
Here are four areas where AI can be genuinely helpful, provided you keep a human in the loop:
- Processing huge datasets: AI can crunch numbers and analyse vast amounts of data far faster than a human team, helping you compile reports and spot trends.
- Content sorting and ideation: You can use AI to sort messy content into smaller, thematic chunks. This is excellent for identifying new angles or content gaps that might not be immediately obvious, especially if you’ve been entrenched in the project for months on end.
- Desk research at scale: AI can summarise long documents or scan the web for broad information, giving you a head start on research (as long as you verify the sources).
- Creative efficiency: For internal storyboarding, minor editing, or creating rough mock-ups, AI can make the production process cheaper and faster, allowing creative teams to focus on producing the final, polished version.
The golden rule: keep humans in the loop
The common thread across all these positives is the need for supervision. Because of the high hallucination rate and the risk of ‘soulless’ creative output, AI should never be the final sign-off.
Humans must remain the architects of the strategy and the guardians of quality. AI is a tool to help you build the house, but you wouldn’t want it pouring the foundations without an architect watching over it.
AI tools can, and should, be used in modern marketing to improve efficiency and spark new ideas. However, the technology is not ready to take the wheel, and if I’m being honest, I don’t think it ever will be. By treating AI as an assistant rather than a replacement for expert staff, you can avoid the costly mistakes of Deloitte and the creative fumbles of major global brands.