I recently went to a marketing conference, and I heard the same AI misconceptions being repeated over and over again. The same “I heard somewhere that…” statements that were debunked months and years ago. And it’s holding us back.
When we continue to believe in these AI myths, we end up trying to use AI in ways that don’t show its true potential (like expecting Custom GPTs to churn out on-brand content because you attached your tone of voice guidelines). Or we end up avoiding it altogether because it’s “bad for the environment”.
The reality is, AI is a powerful tool that works brilliantly for some tasks and not so well for others. Understanding which is which requires testing, not mythology.
So here are eight AI myths I’ve heard that desperately need putting to bed – and what you should do instead.
The 8 AI myths that won’t die:
| Myth | Reality | What to do instead |
|---|---|---|
| Custom GPTs are good for tone of voice | Custom GPTs use RAG to reference documents, not embed instructions | Embed tone of voice rules directly in system messages or user prompts |
| Meta tags are good for AI SEO | Meta tags are just regular SEO – AI needs broader authority signals | Build genuine authority through expert content, PR mentions, and multi-format presence |
| AI is bad for the environment | One query uses 0.34 watt-hours – about what an oven uses in one second | Focus on transport, diet, and heating if you care about your carbon footprint |
| Google not returning 100 results means they no longer use Reddit | Result count has nothing to do with AI overview sources | Create content that genuinely answers questions people are asking |
| AI has zero emotional intelligence | AI excels at sentiment analysis – millions use AI chatbots for emotional support | Use AI for empathetic customer service, audience analysis, and emotional copywriting |
| AI cannot create original ideas | AI generates novel ideas that expert researchers rate as more creative than human ideas | Treat AI as a creative partner for campaigns, content strategy, and pitches |
| Hallucinations are rife | Modern AI models hallucinate less than 4% of the time with proper setup | Use the right model configuration and review processes. Just like you would with any team member |
| Progress has hit a wall | Pre-training and post-training continue to deliver big intelligence increases | Stop waiting for AI to plateau and start building it into your workflows now |
Each myth is explained in detail below, with evidence and practical guidance on what to do instead.
Myth 1: “Custom GPTs are good for tone of voice”
The logic seems sound: create a Custom GPT, attach your tone of voice guide, and boom – instant on-brand content, right?
Wrong.
Why it’s wrong
We’ve tested this extensively at Definition, and we’ve found that attaching your tone of voice guide to a Custom GPT doesn’t work nearly as well as including it directly in the system message or user prompt.
This makes sense because Custom GPTs treat attached documents as reference material, not as core instructions. They use something called Retrieval-Augmented Generation (RAG) – which is brilliant for looking up facts, but struggles with maintaining consistent tone. When a Custom GPT pulls from an uploaded tone guide using RAG, it’s essentially just referencing the document – grabbing relevant snippets when needed – rather than truly absorbing and embodying that tone.
System and user messages, on the other hand, are baked into how the model processes every single request.
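To make the distinction concrete, here’s a minimal sketch of what “baked into every request” looks like with a chat-style API. The SDK call, model name and rules below are illustrative assumptions, not our production setup:

```python
# A minimal sketch: tone-of-voice rules travel inside the system message,
# so the model treats them as core instructions on every single request.
TONE_RULES = """\
- Warm, plain-spoken British English.
- Short sentences. Contractions are fine. No jargon.
- Never use exclamation marks."""

def build_messages(user_request: str) -> list:
    """Attach the tone rules to every request via the system message."""
    system = (
        "You are our brand copywriter. Follow these tone-of-voice rules "
        "in every reply:\n" + TONE_RULES
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Draft a two-line product update announcement.")
# Then pass `messages` to the chat endpoint, e.g. (illustrative):
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Contrast this with a Custom GPT, where the uploaded tone guide only gets retrieved in snippets when the model decides it’s relevant.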
What you need to do
Work with people who know how to write proper AI prompts that embed your tone of voice rules directly.
We’ve got a team for that.
Talk to our AI specialists

Myth 2: “Meta tags are good for AI SEO”
Okay, this one’s not exactly a myth – more of a misunderstanding.
Why it’s wrong
Yes, title tags and meta descriptions are good for AI SEO. But they’ve always been good for SEO. This isn’t new. This isn’t an AI thing. This is just… SEO.
When search engines are deciding which brands to recommend, they’re looking for trusted voices across the entire web – not just well-tagged pages on your website.
What you need to do
Start thinking about how to build genuine authority across the web. Some people call this Generative Engine Optimisation (GEO), Answer Engine Optimisation (AEO), or Large Language Model Optimisation (LLMO). Whatever label you use, it’s the new frontier for visibility.
This means:
- Creating clear, expert content that answers the questions people are actually asking
- Securing credible PR mentions across sector media that signal you’re a trusted voice
- Building strong digital signals across multiple formats – text, video, podcasts, social – because search is no longer just text-based
Myth 3: “AI is bad for the environment”
Headlines about AI companies using massive amounts of energy and water have created the impression that individual AI use is environmentally harmful.
Why it’s wrong
Let’s put things into perspective: the average ChatGPT query uses about 0.34 watt-hours (about what an oven would use in a little over one second). It also uses about 0.000085 gallons of water (roughly one-fifteenth of a teaspoon). And data centres themselves consumed only 0.04% of America’s freshwater in 2023 – around 3% of the water consumed by the American golf industry.
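As a back-of-envelope check on those figures (the 1 kW oven element is our own assumption – ovens vary):

```python
# Sanity-check the per-query figures quoted above.
query_wh = 0.34          # watt-hours per average ChatGPT query
oven_watts = 1000        # a modest 1 kW oven element (assumption)

# Watt-hours -> watt-seconds, divided by oven power = seconds of oven use
seconds_of_oven = query_wh * 3600 / oven_watts
print(f"{seconds_of_oven:.2f} seconds of oven use per query")  # ~1.22 s

query_gallons = 0.000085
teaspoons = query_gallons * 768  # 768 US teaspoons per US gallon
print(f"{teaspoons:.3f} teaspoons of water per query")  # ~1/15 of a teaspoon
```

The arithmetic holds: a little over one second of oven time, and about a fifteenth of a teaspoon of water.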
So, yes, AI companies use significant energy in absolute terms. But that’s because they’re serving millions of people simultaneously. It’s like saying a stadium uses more electricity than a coffee shop.
Well, yes – because it’s serving thousands of people at once, not five.
The bottom line
The energy and water used by individual prompts don’t meaningfully add to your personal carbon or water footprint.
If you want to reduce your personal carbon footprint, focus on things that actually matter – transport, diet, heating. Not whether you asked an AI three questions today instead of two.
Want more info on this? Check out this research on AI energy usage and AI water usage.
Myth 4: “Google not returning 100 results means they no longer use Reddit as a source in AI answers”
Google recently removed the option to show more than ten search results per page; previously, users could set their results as high as 100.
Why it’s wrong
This is a nail in the coffin for most keyword-tracking companies, but it has absolutely nothing to do with the sources AI models are citing in their answers.
I don’t understand why anyone would perpetuate this myth, but it does speak to the level of misunderstanding prevalent in the marketing industry at the moment.
Speak to our AI experts

Myth 5: “AI has an emotional intelligence score of zero”
People assume that because AI doesn’t “feel” emotions, it can’t understand or work with human emotions.
But remember when OpenAI briefly turned off GPT-4o and people were genuinely upset about losing their “friend”? They had to restore access because of the outcry.
That doesn’t happen with tools people think have “zero emotional intelligence”.
This isn’t an isolated case:
- In China, young people are using DeepSeek as an alternative to therapists.
- Microsoft’s popular social chatbot Xiaoice has 660 million online users worldwide, who prize it as a “dear friend” because of its ability to relate and interact through nuance, social skills, and, importantly, emotions.
Why it’s wrong
People think “AI cannot ‘emotionally’ process content”, when sentiment analysis – processing emotional content – is exactly what it excels at.
What you need to do:
Think of AI as a tool that’s good at understanding and working with human emotion. AI can help you:
- craft empathetic customer service responses
- analyse how your audience is feeling about your brand
- write copy that hits the right emotional notes.
You just need to prompt it properly.
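As an illustration of what “prompting it properly” can look like for sentiment work, here’s a minimal sketch. The scoring scale and JSON output contract are our own illustrative conventions, not a standard API feature:

```python
# Sketch: ask a model to score the sentiment of a brand mention.
# The -2..+2 scale and JSON shape are illustrative choices.
def sentiment_prompt(mention: str) -> str:
    return (
        "Rate the sentiment of this customer comment about our brand "
        "on a scale from -2 (very negative) to +2 (very positive), "
        "then name the dominant emotion in one word.\n"
        'Reply as JSON: {"score": <int>, "emotion": "<word>"}\n\n'
        f"Comment: {mention}"
    )

prompt = sentiment_prompt(
    "Delivery took three weeks and nobody replied to my emails."
)
# Pass `prompt` as the user message to whichever model you use.
```

Asking for a fixed scale and a structured reply is what turns a vague “how do people feel?” question into something you can aggregate across hundreds of mentions.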
Myth 6: “AI cannot create original ideas”
Except… it can.
Because AI is trained on existing data, people assume it can only remix what already exists.
Why it’s wrong
AI is actually really good at generating ideas. A 2024 study recruited over 100 Natural Language Processing (NLP) researchers to write novel research ideas, then had them blindly review both human-written and LLM-generated ideas.
The result? LLM-generated ideas were judged as more novel than human expert ideas.
Likewise, Denario, a new system developed by researchers from universities across the world, can formulate research ideas, review existing literature, develop methodologies, write and execute code, create visualisations, and draft complete academic papers – one of which has just been accepted for publication at an academic conference. And the scary bit? It can produce each paper in approximately 30 minutes for about $4.
If AI can generate research concepts that expert researchers find more novel than their own ideas, what could it do for your marketing campaigns? Your content strategy? Your next big pitch?
What you need to do
Treat AI as a creative partner. Ask it for ideas. Push it to go weirder, bolder, more unexpected. Some suggestions will be rubbish. But some? Some will make you think “Huh. I never would have come up with that.”
Myth 7: “Hallucinations are rife”
This is the big one that stops marketers using AI altogether. “But what if it just makes things up?” they say, clutching their keyboards protectively.
Yes, AI can hallucinate. But let’s talk about what that actually means in practice.
Why it’s wrong
Different models perform differently. And an individual model performs differently depending on whether you’ve given it files to analyse, enabled internet search, or chosen the reasoning or non-reasoning version.
Here are the hallucination rates for GPT-5 (the reasoning version) on established benchmarks when web browsing is enabled:
- LongFact-Concepts: 0.7%
- LongFact-Objects: 0.87%
- FActScore: 1%
And when browsing is disabled?
- LongFact-Concepts: 1.1%
- LongFact-Objects: 1.4%
- FActScore: 3.7%
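Putting those benchmark figures side by side shows just how much the right tooling matters – a quick sketch using the numbers above:

```python
# Compare GPT-5 (reasoning) hallucination rates with and without browsing,
# using the benchmark figures quoted above.
rates = {
    "LongFact-Concepts": (0.7, 1.1),   # (% browsing on, % browsing off)
    "LongFact-Objects":  (0.87, 1.4),
    "FActScore":         (1.0, 3.7),
}

reductions = {}
for name, (on, off) in rates.items():
    reductions[name] = (off - on) / off * 100
    print(f"{name}: {off}% -> {on}% "
          f"({reductions[name]:.0f}% fewer hallucinations with browsing)")
```

On FActScore, simply enabling browsing cuts hallucinations by roughly three-quarters – the same model, configured differently.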
Would you only employ someone if they could guarantee they’d never make a mistake? Would you never review someone’s work?
Labs are actively working on reducing hallucinations, and progress is being made. The latest models are significantly more reliable than versions from even six months ago.
What you need to do
Don’t panic. And don’t avoid AI because you’ve had the fear of god put into you. Just get the facts and mitigate the risk.
Be aware hallucinations can happen. Use the right combination of model and tooling to reduce them. Understand the risk. Tailor AI’s role in your workflows accordingly, and review its output just like you would any team member’s work.
Myth 8: “Progress has hit a wall”
You’ve probably heard people say that AI scaling has plateaued. That we’ve picked all the low-hanging fruit. That the next breakthrough is years away.
Why it’s wrong
AI scaling laws continue to hold. Pre-training (the bit that uses masses of training data) hasn’t had as much attention as post-training (teaching AI how to do complex reasoning) recently, but it’s still delivering massive intelligence increases as predicted.
Oriol Vinyals, VP of Research & Deep Learning Lead at Google DeepMind and Gemini co-lead, recently said:
“The secret behind Gemini 3? Simple: Improving pre-training & post-training. Pre-training: Contra the popular belief that scaling is over […] the team delivered a drastic jump. The delta between 2.5 and 3.0 is as big as we’ve ever seen. No walls in sight! Post-training: Still a total greenfield. There’s lots of room for algorithmic progress and improvement […].”
Both pre-training and post-training are delivering significant improvements. The exponential curve hasn’t flattened – we just got temporarily distracted by one type of progress while another was quietly accelerating in the background.
What you need to do
Stop waiting for AI to plateau before you take it seriously. The capabilities are increasing, not decreasing. Every month you delay integrating AI into your workflows is a month your competitors are pulling ahead.
Start building AI into your processes now. Test what works. Iterate. Because by the time you think AI has “arrived”, it’ll already be too late to catch up.
BONUS myth: “AI copy detectors work”
Think about this logically: for a detector to work, there would need to be clear and obvious signs in the copy itself.
Why it’s wrong
We’ve all heard of the common telltale signs (e.g. the much-maligned em-dash), but:
a) it’s remarkably easy to alter the tone and style of AI output with a simple prompt, instantly rendering copy detectors redundant
b) telltale signs are naturally short-lived
c) even OpenAI could only detect AI-generated copy 25% of the time – and that was back in 2023, when they were testing GPT-3, a very old model. Six months later they shut their detector down, stating that “the AI classifier is no longer available due to its low rate of accuracy”. An easy rule in AI land: if one of the major labs can’t do it, you can be damn sure no-one else can either
d) plenty of research confirms they don’t work.
What you need to do
Use prompts and fine-tuned models to shape your copy and embed your tone of voice in it. This should be standard practice for every brand anyway.
We can help you get started with AI
AI is developing faster than our ability to understand it. So we cling to outdated information, repeat what we heard at a conference six months ago, and make decisions based on myths rather than reality.
But while you’re debating whether AI has feelings, your competitors are using it to write better content, understand their audiences more deeply, and work more efficiently.
The solution isn’t to avoid AI – it’s to use it properly. That means:
- Testing to find out what works
- Understanding AI’s genuine strengths and limitations
- Building workflows that leverage AI where it excels
At Definition, we’ve spent months experimenting with AI across content creation, SEO and brand strategy. We’ve built prompt libraries, tested different approaches, and figured out what actually delivers results vs. what’s just hype.
Want to talk about using AI properly in your marketing?
Let's chat
Written by Luke Budka, AI director at Definition on 14/11/2025