“Aren’t you worried AI is taking your job?”

It’s the first question I’m asked when people learn I’m a copywriter. And it comes up regularly with new clients.

The short answer is no.
I have my worries around AI, but we’ll get to those in a bit.

What I want to get out of the way first is that I’m not about to tell you to hire me instead of using AI. That’s not what this article is about.

You’re a sovereign being, free to make your own choices. This article is my answer to the question above, and an exploration of why.

One of the sharpest in the box

So we all know that AI is a great tool.
Maybe you can’t afford a copywriter right now, and you need to get a certain asset out there. You create something with the help of AI. It does what looks like a great job. You breathe a sigh of relief. You’ve just leveraged a tool.

We leverage tools to our benefit all the time. When I go to the supermarket, I prefer to go to the checkout and interact with a human being. But sometimes the queue is too long, my toddler is on the cusp of a meltdown, and the fluorescent lighting is giving me cabin fever. I want to get out of there as soon as possible, so instead of waiting in the queue, I’ll use the machines.
I often end up being flagged to need human assistance and will stand there waiting like a lemon and regretting my choices, but that’s just me.

Anyway.
AI and LLMs (Large Language Models) have become the first point of contact for many people using the internet, and two things are visibly emerging in full force:

  1. The Dunning-Kruger effect, and

  2. Measurable cognitive decline.

Let’s start with the Dunning-Kruger effect. It’s a bias that’s both endearing and dangerous.


The bias with the most audacity

When you learn a little bit about something, your confidence spikes and you believe you know more than you actually do.
This effect is a humbling reminder of how endearingly fallible we are.

Michael Scott in the US Office? A walking, talking Dunning-Kruger effect.
Joey Tribbiani from Friends? Completely overestimates his own abilities.

And me? I have a story. When I first read Samin Nosrat’s Salt, Fat, Acid, Heat and applied the techniques, I was so confident in my cooking that I could have happily walked into a professional kitchen and tried my luck. Any actual chef would know that I was not ready in any way, shape or form for a professional kitchen. I was just confident because I knew some good techniques and my friends said my cooking was really good.

That’s the Dunning-Kruger effect.

And in case you haven’t noticed already, AI amplifies the Dunning-Kruger effect tenfold.

Asking AI a question about your writing, or asking it to do your writing for you, doesn’t increase your understanding of copywriting or your ability to prompt good copy. It just increases your confidence.

AI systems and LLMs condense vast amounts of data to give generic, easily digestible answers. So basically, every answer you get is the generic summary. Over time, that averaged-out middle will completely marginalise unique or unconventional output. It homogenises everything.

Let’s strip it down.

AI and LLMs:

  • Simplify definitions

  • Paraphrase explanations

  • Reply in bullet points that feel complete

  • Encourage you, and perpetuate biases

  • Restate basic concepts while concealing information

  • Sound very coherent and friendly

It helps people think they know more, especially those who lack the expertise to recognise what they’re missing. You don’t know what you don’t know.

If you’re a copywriter using an LLM to ideate, you know which direction to push it in. You can spot what’s inappropriate, you can give feedback, and you can meaningfully collaborate with it rather than just accepting what it’s offering you.


0x10=0

I wouldn’t have the foggiest idea how to design a logo. I don’t know what makes a good logo, what grabs people’s attention, or the psychology behind it.

Why is the Starbucks logo a mermaid and not a coffee bean? I don’t have the expertise to answer. But it’s the first interesting logo that came to mind. We all know that if I asked AI to create a logo for my coffee shop, I’d probably get a homogenised version of a mug that’s “on brand”.

I’d love to cut costs and prompt a nice logo for Sage Copywriting from AI, but the pivot I’d have to make down the line would probably be more expensive. Any time you use AI to do a task you wouldn’t otherwise know how to do, you are accepting the median average, unconsciously warping your judgement of what high standards look like, and of your own competence. You are falling into bias traps, my friend.

Nothing times ten = nothing.

Nothing times one hundred = still nothing.


Let’s look at an example in the wild. An MIT lecturer used ChatGPT to “understand” the Riemann Hypothesis, one of the most complex unsolved problems in mathematics.
ChatGPT responded with confidence and simplified analogies. The lecturer didn’t get any closer to solving or understanding it; he just got closer to feeling like he’d understood it.

This is where I feel things start to get dangerous.

Because when everyone starts to believe they can understand anything, expertise declines, standards erode, and the ravine between confidence and competence widens.

Lowering the bar

And on the subject of things eroding.

We’re living through a moment in history where, in the West, human intelligence has started to measurably decline. For a long time, scientists observed the Flynn Effect, where average IQ rose with every passing generation (thanks to improved education and nutrition, smaller families, etc.).

But in the last 20 years, they’ve started measuring the “Reverse Flynn Effect”. IQ in developed countries has peaked and is now in decline, in some countries by 2-4 points per generation.

Why?

There are many theories, but anyone can see the elephant in the room:

- We’re handing over our thinking skills to machines
- We’re not allowing ourselves one second of boredom
- We’re frying our attention spans with short-form, superficial social media

We’re basically doing a bunch of things that will naturally lead to cognitive decline. Calculation, memory retrieval, navigation, and nowadays even writing and analysis… when was the last time you did any of that without outsourcing it to technology? Are we making our lives more convenient, or are we allowing our brainpower to atrophy?

These are my worries.

My own biases

I’m aware that I could be straw-manning here. IQ tests were designed to identify and sort who could be trained for labour, teaching, or clerical work, and they don’t really fit into the jigsaw puzzle of society today, do they? They’re reliably indicative of “desirables” like income, educational attainment and social status. But the skills that mattered in the ol’ days of the IQ test don’t map onto our present AI-augmented economy. I know they don’t say everything. But they do say something.

And I know my argument has skin in the game. Social media platforms literally depend on capturing and fragmenting our attention. And the more we delegate mental effort to AI, the more we increase our cognitive debt, creating lasting effects beyond the single prompt you just typed.

I reckon that in a decade or so we’ll look at this in the same way we now view smoking in terms of harm to our health.

Future-proofing

I have so many questions about the future of education and our creative and economic potential. Like, what on earth is the point of teaching a large chunk of our math curriculum if we’re going to outsource those skills anyway? If our routine cognitive tasks are being handed over to AI, will our schools nurture skilled trades, physical activity, emotional intelligence and ethical debating more? I hope so.

In the present (as in February 2026, as I write this), I strongly believe that if we continue without any guardrails, there’s going to be a shortage of people able to ideate, innovate, and think for themselves.

To even be capable of ideating, innovating, and thinking for yourself, your mind has to be sharp. Trained. Not atrophied. Because if you can think for yourself, you can think for others. You can lead. And the truth remains: however we choose to measure cognitive capacity in the future, it is absolutely essential for us to thrive and flourish in the economy, in our communities, and in our individual lives.

I’m not saying it’s all doom and gloom. But I do believe that the trick here is to think carefully about our thinking power, our creativity, how we can preserve and refine it, and what kind of future we want to build.

Redefining value

So am I afraid AI will take my job? No.

Because sooner or later everyone, not just the creatives, will see that LLMs have homogenised marketing, that everything sounds the same, and that nothing has meaning anymore. The internet is literally crawling with easily distinguishable AI content, because it has about as much depth as a crepe pan. It’s always somewhat off the mark. Like a rubbish pizza: still pizza, but it leaves a weird taste in your mouth and your stomach feels bad the next day.

Once we’ve woken up to that, we’ll go “uh oh” when we realise our brains literally can’t approach ideation, like beginning a blog, without going blank and reaching for the help of AI. Because we handed over the skill.

Then we’ll be scrambling to hire the people who future-proofed their brains by not leaning on AI like a crutch in these early days.

I also believe that we’ll soon come to value time again. Churning out content for content’s sake is a symptom of an AI-augmented economy, and it completely bulldozes over the value of intentional work from relaxed creative minds.


A large part of creativity is the act of making a new connection by linking known things. You have the experience, then you make the connection. But that does require a bit more time than the quick responses we are coming to expect from AI.

If I need an insight or an idea, I give it space to emerge. That space is the container for miracles.

My best ideas and insights come whenever I’m doing something mundane like taking a shower or doing the dishes. Eureka! And when I’ve given clients that extra bit of space to feel safe and open-hearted in our conversation, or to talk without time constraints, we often get right to the juicy part of the campaign or product. Connections need to be coaxed out while you’re relaxed. It might feel unproductive to take the extra ten minutes, but I would argue it’s the complete opposite. It sets the foundations for excellent copy.

The people who realise this sooner rather than later will be the ones with a strong brand voice, reciprocity and trust between their audience and their product, and a team who aren’t burned out.

Because these days, people value quality over quantity. They want to make intentional investments instead of falling into passive consumption every day. The task now is to speak directly to those values and earn trust by modelling our own integrity. We can’t do that if we’re homogenising everything, sounding like broken records, and feeding into our own cognitive decline.

AI is powerful. But don’t underestimate the power of someone vested in your mission with a relaxed, sharp, and creative mind.