Some ironies come gift-wrapped.

When the Chicago Sun-Times recently published a summer reading list of 15 recommended works of fiction, the paper had no idea that the article was itself mostly fiction. Only five of the recommended titles actually existed. The rest? Produced – nay, hallucinated – by an AI used by a freelancer who didn’t bother to fact-check it. Nor did the paper.

But perhaps more telling was the reaction. The story quickly went viral, prompting responses that ranged from amused (“Take that, AI!”) to resigned (“Is nothing fact-checked anymore? Are humans already out of the loop?”). It seemed as if people had been waiting for something like this to happen, reactions primed and ready.

Inevitably, it threw up dozens of questions. Should someone really be using AI to produce a newspaper article? Should they have told the paper they were doing so? If so, when – before they’d even prompted the AI, or after they’d handed in the finished piece? Was it the writer’s responsibility to make sure the facts were correct, or the paper’s, or, more broadly, the AI company’s? Realistically, who is to blame when an AI hallucinates?

And it was those questions that told us what really lay at the heart of this story: not summer reading lists, freelancers’ working practices or a particular American newspaper’s AI policy. Rather more grandly, it was all about the ethics of AI.

Questions and answers

All businesses currently using AI are grappling with AI ethics, whether they know it or not. Variations on the questions raised by the Chicago Sun-Times incident are being asked all the time in workplaces and boardrooms all over the world.

At an individual level, they might be something like, “Should I be using AI for this task or that one?” or “Is it OK to publish AI-produced content without thoroughly checking it first?” At executive level, they’re more likely to be along the lines of, “Should we invest in AI systems?”, “How will our AI strategy impact our workforce?” or “Will using AI affect our brand?”

All ethical considerations. And, as a result, answering these questions is confusing and fiddly. Unnerving, even. Right now, with AI, we’re all essentially taking part in a collective experiment in real time, writing the guidebook ourselves as we go along.

This isn’t an abstract philosophical debate. With brand reputation, employee wellbeing and a million other knotty things to think about, it’s something every AI-using company needs to be on top of, today.

Five key takeaways

Fortunately, there are plenty of resources out there on the topic.

I recently completed an Ethics of AI course at LSE, which covered many of the most pressing areas that businesses need to consider, such as fairness, accountability and transparency (FAT), explainability (XAI), alignment and algorithmic biases.

Here are five key takeaways from the course that apply to any business using AI in its day-to-day work.

1. Sort out your AI governance today

Waiting for an ethical misstep to happen before thinking about AI ethics is a bit like waiting for a data breach before investing in cybersecurity. The potential costs – financial and reputational – are simply too high. Instead, create clear lines of responsibility, build ethical review boards, and set out processes for impact assessment – and do it now.

2. Watch out for algorithmic biases

Large language models are trained on huge amounts of data from across the web – so, naturally, they can return biased outputs. The ethical lesson here is to remind your employees to think critically about those outputs. If we take AI at face value, we risk perpetuating these biases in our content and in the AI-influenced decisions we take.

3. Ask yourself tough questions

There’s a rush to adopt AI systems right now – and understandably so. But even if you’re forging ahead with AI integration, you should be asking questions all the time:

  • Are there jobs we don’t want to use AI for?
  • Do our employees know the best ways to get fair and useful answers from AI?
  • Will using AI impact the way we’re seen by others?

4. Think about the human impact of AI

By asking hard questions about AI adoption, you’ll almost certainly arrive at the hardest question of all: what impact will using AI have on your staff?

We strongly believe that AI should be used to augment human skills rather than replace them. If your company believes the same, make sure you clearly spell this out in your AI guidelines. 

5. Be transparent

Companies are at different stages of AI adoption right now, and that can leave people wondering whether your content or decisions are powered by AI.

So be open and transparent about how you use AI. Your customers and clients will appreciate the honesty – and trust you more for it.

At Definition, we think long and hard about AI: about how companies can use it most effectively, and how to build ethical frameworks for integration.

If you’d like to chat about AI ethics, our Definition AI suite, AI training, or anything else AI-related, drop us a line at ai@thisisdefinition.com.

Written by Senior Writer and AI Consultant Nick Banks at Definition, who recently completed LSE’s Ethics of AI course and loves talking to companies about the ethical side of AI adoption.