
Artificial Intelligence: When is an employer vicariously liable?

Written by Joe Hennessy | 14 January 2026

Artificial Intelligence (AI) is now a standard workplace tool. The scale of use is clear. In its Autumn 2025 Labour Market Outlook, the CIPD surveyed 2,019 UK employers and found that employees are using AI tools at work in 76% of organisations. Use was higher in the public sector (87%) than the private sector (73%).

The bigger issue for employers is not the tech. It is control. The same CIPD report found 54% of employers say staff use free AI tools and 42% say staff use paid-for tools (including tools paid for by employees). In practice, that can mean staff using personal accounts and subscriptions outside the employer’s systems, settings, and audit trail.

This is “shadow AI”: work done with AI off the radar, without clear rules on what can be uploaded, without logging, and without consistent review standards. When it goes wrong, it can go wrong quickly and sometimes in public.

A recent example landed in January 2026, when West Midlands Police’s chief constable apologised to MPs after incorrect information linked to Microsoft Copilot was used in evidence connected to the decision to ban Maccabi Tel Aviv fans from attending a match. Different context, same lesson for employers: once AI output is treated as fact, the damage is done and the organisation ends up answering for the consequences.

How would liability attach?

AI is not a legal person. It cannot owe duties, form intentions, or be “employed”. Courts and tribunals are unlikely to treat “the AI tool” as the defendant. Instead, liability will usually attach in one or both of these ways:

  1. Direct liability. For example, where an employer chooses to deploy AI (such as a chatbot) without proper controls, training, or checking processes, or runs a process that relies on AI outputs without suitable oversight.
  2. Vicarious liability. Where someone working for the organisation uses AI in the course of their work and commits a wrong (for example, negligent misstatement, breach of confidence, or other client-facing wrongdoing). A common scenario is reliance on unverified AI output that is then sent to a client or third party, causing loss. The legal focus is the user’s conduct and how closely it connects to the work they were engaged to do, not the tool itself.

What is the legal test for vicarious liability?

The courts usually approach vicarious liability in two stages:

  1. Is the relationship one that can attract vicarious liability?

This is typically an employer/employee relationship, but it can extend to relationships “akin to employment”. That can arise where someone is sufficiently integrated into the business’s work and the business can direct how that work is done, even if there is no employment contract on paper.

  2. Was the wrongdoing sufficiently connected to the role?

In other words, was the wrong so closely linked to what the person was engaged to do that it is fair to treat it as done in the course of the role?

How this plays out with AI at work

If staff are using AI as part of their job, the risk is rarely “autonomous AI”. It is ordinary workplace activity happening faster, with more confidence, and sometimes with less checking.

Vicarious liability risk is most likely to arise where:

  • Client-facing communications are involved. Sales, account management, customer support, and legal or technical teams use AI to draft emails, advice-like responses, proposals, scope documents, and tenders.
  • AI output is treated as “good enough”. People copy and paste without proper review. Facts, figures, citations, assumptions, and tone are not checked, and hallucinated content is missed.
  • Personal data is mishandled. Uploading personal information into an AI platform is itself processing of personal data. If the tool is not approved and configured, employers may have limited visibility over where data is stored, who can access it, and what security measures apply. That increases UK GDPR risk, including around security and international transfers.

What employers should take from the CIPD data

The CIPD figures point to a simple governance problem: AI use is widespread, but a lot of it is happening in places employers cannot easily see. Hybrid working makes that worse. People draft from home, on trains, or on personal mobiles and laptops, switching between work and personal accounts.

That is exactly how shadow AI grows: the work still gets done, but the business loses the audit trail and the ability to control what data goes in and what leaves. The ICO has flagged that “bring your own device” arrangements come with security risks and need careful thought in a homeworking context.

The practical bottom line

The vicarious liability risk here is not theoretical. If someone is doing their job, uses AI as part of that work, and a client or third party suffers loss, the employer is usually the obvious target.

In practice, you will not eliminate shadow AI. People will still use tools discreetly, particularly when working from home or on personal devices. The aim is risk reduction and early detection. Simply saying “we didn’t approve that tool” is unlikely to carry much weight if the employee was using it while carrying out their role.

What helps is evidence of control: clear rules, practical training, and systems that make unsafe use harder. ACAS has also advised employers to develop clear policies on AI use at work and consult workers and any representatives on its introduction.

How do you reduce the risk?

You cannot police every prompt. You can reduce the risk by making safe behaviour the default:

  • Make the compliant route the easiest route. If staff need AI to do their jobs, provide an approved tool that is accessible and quick, so people do not default to personal accounts.
  • Control the data, not the behaviour. Set a clear red line: no client confidential information and no personal data goes into unapproved tools. Train with specific examples from your business.
  • Build verification into the workflow. For client-facing work, require a human check as a mandatory step. If output can leave the business without review, the control is not real.
  • Tighten hybrid working and BYOD controls. Set minimum device and account standards (screen lock, encryption, updates, separation of work and personal where possible). The aim is not surveillance. It is preventing leakage and reducing accidental disclosure.
  • Use proportionate monitoring and logging. Log use on employer-managed systems and approved tools. Monitor for obvious indicators of unmanaged use on work devices.
  • Make disclosure expectations clear. Using AI is not the problem. Using it in a way that bypasses controls, hides use on high-risk outputs, or involves restricted data is the problem.
  • Run light-touch spot checks. For higher-risk teams (client service, sales, HR), sample outputs and ask what tools were used, whether data was entered, and who checked the output.

How Neathouse Partners Can Help

Shadow AI is not going away. The question is whether you find it through governance, or through a complaint. If you want to get ahead of it, speak to one of our advisers at Neathouse Partners about an AI risk review. We can help you put in place an AI policy, approved tools, and workable guardrails that fit how your people actually work.