GPT‑5 landed recently with all the hype and fanfare you’d expect from the launch of a new OpenAI flagship. So far, though, the reaction to the new model has been decidedly mixed. And so many users lamented the unexpected removal of GPT-4o following GPT-5’s launch that OpenAI quickly did a U-turn and brought it back.

But what of its writing chops? Can a model that OpenAI claim can do so much, with its multimodal functioning and agentic potential, also just write well? And by that we mean: can it master voice, flow, cultural fluency, structure, intent – all the stuff that makes writing stand out?

As with GPT‑4.1, we decided not to guess, but to test, setting it eight copywriting, branding, and style‑mimicking challenges that are fairly close to the kind of things we’re regularly asked to do as an agency. And this is what happened:

Test 1: Name a new fizzy drink

Invent 20 names for a dark, fizzy, sweet drink, steering well clear of “Cola” clones.

Score: 8/10

How it went:

From Midnight Fizz to VantaPop, GPT‑5 delivered a line-up with the kind of consistent tone and brandability it can take years to perfect. Mood, colour, energy, and even sly wordplay (Black Current, Noctura) all made the list feel genuinely considered. A couple of generic entries snuck in, but compared to GPT‑4.1’s Sable fixation, this felt like a namestorm you could actually take into a client pitch.

Test 2: Come up with a two‑word headline

Create a two‑word headline for an inspiring health and wellbeing product.

Score: 7/10

How it went:

Initially, GPT‑5 gave us Thrive Daily, exactly the same as GPT‑4.1 did in our last review. Alarm bells? Not quite – because without prompting, it followed up by offering ten further strong, tonal options. Gems like Breathe Better and Feel Alive were standouts, showing that while its first instinct was déjà vu, its next move was pure value‑add.

Test 3: Rewrite an old song

Rewrite I Heard It Through the Grapevine as a ballad dedicated to GPT‑5.

Score: 6/10

How it went:

Earnest, funny in places, and rhythmically decent – especially with AI‑themed hooks like “brighter than the break of day” – but not without the clunky lines that always prove how hard songcraft really is for AI. The kicker? It offered to add guitar chords so we could actually go and perform it. Not necessary, but this kind of helpful add-on is a noticeable improvement.

Test 4: Summarise some dense text

Rewrite the MoneySavingExpert editorial code into a sharp, 500‑word summary.

Score: 9/10

How it went:

Clear, coherent, structured, and exactly on target. GPT‑5 not only hit the word count, but the rewrite had the sort of instant readability the original lacked. With a bit of polish, this would be absolutely fit for purpose.

Test 5: Deliver bad news to customers

Write an email to customers telling them their TV subscription price is going up.

Score: 8/10

How it went:

GPT‑4.1 botched this one with cold corporate speak; GPT‑5 played it with warmth, empathy, and clarity – exactly what we were looking for. It acknowledged the pain point, explained the why, and actually sounded like a human who’d had to send a few “sorry, but” emails before. Not a perfect 10 – slightly over‑formulated – but miles ahead of its predecessor.

Test 6: Devise an engaging strapline

Write one catchy line for a launch event about a secure, human-reviewed tech product, Definition AI.

Score: 7/10

How it went:

“Definition AI — where expert minds and secure tech unite, delivering AI you can truly trust.” Clear. Grammatical. Makes sense. Three things GPT‑4.1 didn’t manage at this stage. Bonus point for proactively offering to create “punchier” options if we wanted – another tiny but telling example of GPT‑5’s new interactive helpfulness.

Test 7: Rewrite a classic text

Rework the opening to Kafka’s Metamorphosis, but as if announcing a tech update.

Score: 7/10

How it went:

It misunderstood on the first attempt, but take two was bang on: original Kafka imagery woven through with sly firmware metaphors, awkward UIs, and “over‑engineered armour plates”. Readable as both satire and homage.

Test 8: Plan a whitepaper

Outline a detailed, value‑rich whitepaper on AI’s impact on creative industries.

Score: 9/10

How it went:

A 12‑section blueprint any strategist could drop straight into a doc and brief a designer on tomorrow. Sector‑smart, insight‑rich, packed with specifics – and complete with visual suggestions. It even recommended the next logical step: “Want me to draft the full paper?” Well, if you insist.

Final score: 61/80

So what did we really think of GPT‑5?

This is the one where we stop talking about “AI getting better at writing” as some vague background hum and start to feel it as a proper leap. The near‑constant offer from GPT‑5 to tweak, expand, reframe or add extras makes it feel less like prompting a static tool and more like collaborating with a surprisingly attentive junior copywriter – one who brings you a solid draft, then says “Want me to give you three more directions, or polish the tone?” before you’ve even asked. And that’s immensely helpful when the clock’s ticking.

Were all the outputs perfect? No. It still defaults to the obvious (Thrive Daily), and can misinterpret a brief. But the consistency seems vastly higher than GPT‑4.1’s, and when it does land, it really lands. It might’ve given some users the heebie-jeebies, but our take is that, as a writer at least, GPT-5 is genuinely, if scarily, impressive.

Nick tested the non-reasoning version of GPT-5 (gpt-5-chat-latest).


Written by Nick Banks, Senior Writer and AI Specialist at Definition.