Artificial Intelligence (AI) is now a standard workplace tool. The scale of use is clear. In its Autumn 2025 Labour Market Outlook, the CIPD surveyed 2,019 UK employers and found employees are using AI tools at work in 76% of organisations. Use was higher in the public sector (87%) than the private sector (73%).
The bigger issue for employers is not the tech. It is control. The same CIPD report found 54% of employers say staff use free AI tools and 42% say staff use paid-for tools (including tools paid for by employees). In practice, that can mean staff using personal accounts and subscriptions outside the employer’s systems, settings, and audit trail.
This is “shadow AI”: work done with AI off the radar, without clear rules on what can be uploaded, without logging, and without consistent review standards. When it goes wrong, it can go wrong quickly and sometimes in public.
A recent example landed in January 2026, when West Midlands Police’s chief constable apologised to MPs after incorrect information linked to Microsoft Copilot was used in evidence relating to the decision to ban Maccabi Tel Aviv fans from attending a match. Different context, same lesson for employers: once AI output is treated as fact, the damage is done and the organisation ends up answering for the consequences.
AI is not a legal person. It cannot owe duties, form intentions, or be “employed”. Courts and tribunals are unlikely to treat “the AI tool” as the defendant. Instead, liability will usually attach to the people and organisations using the tool, and for employers the key route is vicarious liability.
The courts usually approach vicarious liability in two stages. The first stage asks whether there is a relationship capable of giving rise to vicarious liability. This is typically an employer/employee relationship, but it can extend to relationships “akin to employment”. That can arise where someone is sufficiently integrated into the business’s work and the business can direct how that work is done, even if there is no employment contract on paper.
The second stage asks whether the wrongful act is sufficiently connected to that relationship. In other words, was the wrong so closely linked to what the person was engaged to do that it is fair to treat it as done in the course of the role?
If staff are using AI as part of their job, the risk is rarely “autonomous AI”. It is ordinary workplace activity happening faster, with more confidence, and sometimes with less checking.
Vicarious liability risk is most likely to arise where an employee uses AI in the course of their role, the output reaches a client or third party without proper checking, or confidential or personal data is put into tools outside the employer’s systems and audit trail.
The CIPD figures point to a simple governance problem: AI use is widespread, but a lot of it is happening in places employers cannot easily see. Hybrid working makes that worse. People draft from home, on trains, or on personal mobiles and laptops, switching between work and personal accounts.
That is exactly how shadow AI grows: the work still gets done, but the business loses the audit trail and the ability to control what data goes in and what leaves. The ICO has flagged that “bring your own device” arrangements come with security risks and need careful thought in a homeworking context.
The vicarious liability risk here is not theoretical. If someone uses AI as part of doing their job and a client or third party suffers loss, the employer is usually the obvious target.
In practice, you will not eliminate shadow AI. People will still use tools discreetly, particularly when working from home or on personal devices. The aim is risk reduction and early detection. Simply saying “we didn’t approve that tool” is unlikely to carry much weight if the employee was using it while carrying out their role.
What helps is evidence of control: clear rules, practical training, and systems that make unsafe use harder. ACAS has also advised employers to develop clear policies on AI use at work and consult workers and any representatives on its introduction.
You cannot police every prompt. You can reduce the risk by making safe behaviour the default: approved tools on business accounts, clear rules on what can and cannot be uploaded, practical training on checking output before it goes anywhere, and logging that preserves an audit trail.
Shadow AI is not going away. The question is whether you find it through governance, or through a complaint. If you want to get ahead of it, speak to one of our advisers at Neathouse Partners about an AI risk review. We can help you put in place an AI policy, approved tools, and workable guardrails that fit how your people actually work.