Banning AI at work isn’t working.

For employers concerned about employees using AI at work, the best solution is to address the risks with training and technology. A “just say no” approach will cause more problems than it solves.

AI tools, their capabilities and people’s attitudes to them are all evolving fast. That makes it hard to keep track of whether employees are using these tools, what they’re using them for, and whether feeding them company data poses a problem.

We surveyed 1,000 employed adults in the UK to get a snapshot of who’s using AI at work – and whether or not they have their employer’s blessing.

(In case you’re interested, we also asked how they’re using AI outside of work.)

60% of employees regularly use AI

60.2% of the employed adults in our survey said they use AI at least once a month, and another 23.3% use it at least once every few days.

And more than half (54.15%) of these regular AI users are using it at work.

People are using AI to support a wide variety of workplace tasks, from simple to complex, from technical to creative.

Many of those tasks, like writing code, analysing data or even composing emails, could involve prompting the AI with confidential or proprietary data.

And our survey confirms that some employees do share sensitive information with AI tools – including data about themselves, their colleagues and candidates, and the company’s performance.

Most worryingly, 17.9% said they had shared customer complaints and 6.6% had shared patients’ medical histories: both are types of data that companies have a legal responsibility to protect.

Naturally, that’s got employers worried.

1 in 4 UK workplaces has cracked down on AI

Information you put into publicly available AI tools, like OpenAI’s ChatGPT bot, isn’t private. Anything you enter into the free version of ChatGPT can be used to train the model, and there’s a possibility it will be reproduced in a response to someone outside your company.

Whether employers know or only suspect this, they’re making moves to address the risk. Despite the relatively recent surge in AI’s popularity and the pace of change in the field, a quarter (25%) of British businesses are up and running with active workplace AI policies, ranging from restrictions and regulations to outright bans.

And a much higher number are considering a ban.

The top reason employers give their employees for banning AI at work is that they’re concerned about where their data might be going.

But to some employees, the benefits of using AI outweigh the risks.

To some employees, productivity trumps privacy

Employees need to understand the risks before they can accurately gauge whether the benefits outweigh them.

But most employees who use AI at work either wrongly believe the information they enter is confidential, or don’t know whether it’s confidential or not. More than half also either don’t know or have the wrong idea about who legally owns the content produced by AI.

It’s clearly not enough for employers to simply say AI is banned because of concerns about privacy. An employee who thinks the data stays confidential will feel justified in ignoring a ban.

They might even justify ignoring the ban as something they’re doing for the company’s benefit. Among the employees who told our survey they still use AI at work despite a ban, most said they do it because AI makes them a better, more productive employee. And the data backs up that thinking: tasks get finished more quickly, and the quality of work goes up.

Luke Budka, AI Director at Definition says: “We’re seeing a rise in ‘shadow AI’ in the workplace – workers using genAI tools even though their employers have told them they should not. It’s important users are aware that often everything they enter – whether it’s detail from their private lives or confidential company data – is used to train the AIs and may be regurgitated to other users in future.”

“Business leaders should assume their employees are actively using AI and put measures in place to ensure secure access. And it’s not just ChatGPT they need to be wary of – the new free version of Microsoft Copilot also gives the tech giant the right to use anything anyone enters or outputs in any way they see fit. If you carefully read the T&Cs, you realise you’re giving Microsoft, its affiliated companies and third-party partners permission to ‘copy, distribute, transmit, publicly display, publicly perform, reproduce, edit, translate and reformat’ the content you provide – won’t that be nice when your real-life work drama is ‘publicly performed’ or your company’s confidential HR data is ‘publicly displayed’? A serious education piece regarding tool usage needs to take place in the workplace and broader society, much like the way we now teach children in school how to safely use social media.”

How do you solve a lack of understanding?

The answers lie in technology and training.

Providing a private AI environment solves the data privacy problem at a stroke. Unlike with a public bot like the free version of ChatGPT, information entered into a private environment really does stay within your walls.
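To make that concrete, here’s a minimal sketch of what “staying within your walls” can mean in practice. It assumes a model hosted on your own infrastructure and served through Ollama’s local HTTP API; the endpoint, model name and helper function are illustrative placeholders rather than a description of any particular setup.

```python
import json
import urllib.request

# A minimal sketch, not a description of any specific stack: it assumes a
# model you host yourself, served locally via Ollama's HTTP API
# (https://ollama.com). The request goes to localhost, so the prompt
# never crosses your network boundary.
PRIVATE_AI_URL = "http://localhost:11434/api/generate"

def ask_private_ai(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally hosted model and return its reply."""
    payload = json.dumps({
        "model": model,    # placeholder model name
        "prompt": prompt,
        "stream": False,   # return one complete JSON response, not a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        PRIVATE_AI_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # Sensitive details stay on your own hardware.
    print(ask_private_ai("Summarise the main themes in these customer complaints: ..."))
```

Because the only network call is to localhost, nothing an employee types is handed to a third party – which is precisely the guarantee a public chatbot can’t make.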

Training your teams to understand what AI is and how it works can not only reduce the risks – it can harness the potential that employees know is there. Without understanding, enthusiasm for AI is a risk to the company. With quality training, that same enthusiasm can work for you.

Does your company restrict or ban AI?

Worried your company is falling behind the AI curve? We can help.

We built our own private AI environment and we can help you do the same. And AI is a highlight of our training menu. So give us a shout and let’s talk about your next step.