A guide to 'deepfakes' and how they can impact your business

Written by James Rowland | Dec 4, 2023 5:04:23 PM

Deepfakes first appeared on the internet in 2017, so called because they are generated using a deep learning method known as generative adversarial networks (GANs). In early 2023, the president of Microsoft, Brad Smith, said that deepfakes were 'the most concerning aspect about the development of artificial intelligence'.

Although the images and videos generated this way aren't real, they are already sophisticated enough to trick viewers. Most recently, TV finance expert Martin Lewis was depicted in a deepfake video as part of a scam, recommending that viewers buy an app.

This guide will explore what deepfakes are, the impact they can have on your business, and steps you can take to protect your business interests, staff and security from them.


What are deepfakes?

Deep learning, a subfield of AI, can be used to create images or videos that look real enough to be deceptive. “Deepfake” comes from the pairing of the terms “deep learning” and “fake”.

Using these algorithms, existing videos or images can be analysed and then manipulated. One person's likeness can be inserted into existing media, making it look as though they have said or carried out actions they never did.

Voice patterns, facial expressions and the smallest nuances of a person's behaviour or appearance can be learned and replicated using deepfake technology. Part of the concern over the accelerating adoption of AI is that this technology could be used to generate fake news or spread misinformation by forging footage of public figures doing or saying things they never did.
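
To make the mechanism concrete, below is a minimal, purely illustrative sketch of a GAN in Python (using PyTorch): a generator learns to produce samples that a discriminator cannot tell apart from real data. Real deepfake systems apply the same adversarial loop to images and audio at far larger scale; the one-dimensional data and tiny networks here are simplifying assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps random noise to fake "data" samples; the discriminator
# outputs the probability that a given sample is real.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0    # "real" data: samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))     # generated samples

    # Train the discriminator to label real samples 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator into outputting 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around the real mean (3.0).
print(generator(torch.randn(1000, 8)).mean().item())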

The legal, ethical and societal impact of deepfakes is attracting increased attention, and effort is being put into methods of identifying AI-generated content, such as the work of the BBC's Verify team.


How are deepfakes a threat to business?

As most deepfakes are created to mislead viewers into believing or acting on false information, they can pose a serious threat to companies and organisations. In all cases, the threat comes from viewers' inability to tell the difference between what is real and what has been 'faked'.

Credible-looking phishing emails sent to a company's employees are a common infiltration attempt by cyber criminals. Deepfakes potentially pose even more of a threat, in that they could easily convince employees that they are speaking with a manager or CEO of their company.

Employees themselves could also be blackmailed with deepfake footage or images into handing over passwords, money or internal information. The rise of remote working has made protecting against this more difficult, as isolated staff are more susceptible.

At a time when video and voice messaging are becoming more commonplace, companies and their employees should be more aware of how deepfake technology and AI are being used by cyber criminals to attack multiple industries.

Improved authentication

Improving authentication is one business response that could help combat the issue. Companies could also educate their employees more thoroughly about how information is supplied to them internally, and what they should treat with suspicion.

We are already at a point where employees cannot fully trust what they see and hear on a video call, especially as poor-quality video can be used to further disguise forgeries. Mike Kiser, director of strategy and standards at SailPoint, says that improved digital identification is required, such as cryptographically signed proof that is unlocked using biometrics.
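
As a rough illustration of the kind of signed identity proof Kiser describes, the sketch below uses Python's widely used 'cryptography' library to sign and verify a fresh challenge. The enrolment flow and the biometric unlock are assumptions standing in for a real deployment, where the private key would live in secure hardware.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment (assumed flow): the employee's device generates a key pair
# and the company keeps the public key on record.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Before a sensitive call, the employee signs a fresh challenge
# (in a real deployment, only after a biometric unlock).
challenge = b"video-call-2023-12-04T17:04:23Z"
signature = private_key.sign(challenge)

# The other party verifies the signature against the stored public key.
try:
    public_key.verify(signature, challenge)
    print("Identity proof verified")
except InvalidSignature:
    print("Verification failed - treat the caller as unverified")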

He also maintains that training on spotting deepfakes is necessary, and that any protective technology should be easy and intuitive to use. One safeguard is as simple as an employee calling the person in question directly if they have any suspicions.

Given that deep learning technology can be used to replicate a person's voice, face or eyes, there are worrying implications for biometric security, such as voice or face recognition systems. Nick France, CTO of Sectigo, says that biometric authentication technology used by businesses 'now faces significant threats from a huge increase in convincing deepfakes'.

Greater awareness of deepfakes, and how they can pose a threat to authentication and cyber security, is necessary for both companies and their employees if they are to combat them as a threat to business.


Deepfakes and the law

Advancements in AI technology are far outstripping the ability of current laws to keep pace, and the legal landscape in England and Wales remains unsettled. The Online Safety Bill, once passed, will outlaw the sharing of deepfake pornography, but it does not cover other types of content created with AI without the subject's consent.

Explicit content without consent, false narratives and inflammatory statements are all reputational risks that deepfakes can expose people to. The prevalence of social media platforms also means that such content can be spread rapidly and largely without safeguards.

Although the law is not currently keeping up with the rise of deepfakes, there are some options in English law for victims of the technology, including:

  • Defamation
  • Privacy and harassment
  • Data protection
  • Intellectual property
  • Criminal law

AI can, in turn, provide solutions: authenticating genuine content and detecting manipulated content. Blockchain technology, used for transparent and secure digital transactions, could also help certify genuine content.
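
As a hedged sketch of the certification idea: record a cryptographic hash of genuine content when it is published, then check later copies against that record. The in-memory 'ledger' below is a hypothetical stand-in for a blockchain or other tamper-evident store.

import hashlib

ledger = {}  # hypothetical stand-in for a tamper-evident record

def register(content_id: str, data: bytes) -> str:
    """Record the SHA-256 hash of genuine content at publication time."""
    digest = hashlib.sha256(data).hexdigest()
    ledger[content_id] = digest
    return digest

def is_authentic(content_id: str, data: bytes) -> bool:
    """Check whether a later copy matches the registered hash."""
    return ledger.get(content_id) == hashlib.sha256(data).hexdigest()

register("press-release-001", b"original video bytes")
print(is_authentic("press-release-001", b"original video bytes"))     # True
print(is_authentic("press-release-001", b"manipulated video bytes"))  # False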


Your business response to deepfake technology

There are a number of proactive measures HR managers can take to make sure their business is prepared for deepfakes and their possible impact on employees.


Employee training and education

Employees should be aware of what deepfakes are and the risks involved. Extra emphasis should be placed on exercising caution when sharing personal information or sensitive details online.


Cybersecurity

Guidelines for how to recognise and report suspicious content should be made part of a robust cybersecurity policy. Employees should be prompted to create unique, strong passwords and use two-factor authentication where possible. Deepfake detection technologies and tools can be used to help identify content that may have been manipulated. If investing in these tools, make sure they are tested and updated regularly.
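
To illustrate the two-factor authentication point, the sketch below shows time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using Python's 'pyotp' library; the enrolment and secret-storage details around it are assumed.

import pyotp

secret = pyotp.random_base32()   # shared secret, stored server-side at enrolment
totp = pyotp.TOTP(secret)

# The employee's authenticator app computes the same 6-digit code from
# the shared secret and the current time.
code = totp.now()
print(totp.verify(code))  # True while the code is within its time window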


Confidential information

Communications that involve sensitive data, such as confidential information or financial transactions, should be subject to multi-step verification procedures. Employees should also be instructed to verify the identity of the people they are communicating with, whether on video calls or through any other kind of electronic communication. Sensitive information should be shared only over encrypted, secure communication channels, never over easily accessible platforms.
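
As one hedged sketch of what a multi-step verification rule might look like in code, the example below only releases a payment once two distinct approvers have confirmed it; the approver names and the two-approver threshold are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount: float
    payee: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        """Record an approval, e.g. after an independent call-back check."""
        self.approvals.add(approver)

    def can_execute(self, required: int = 2) -> bool:
        """Only release the payment once enough distinct approvers confirm."""
        return len(self.approvals) >= required

request = PaymentRequest(amount=50_000, payee="Supplier Ltd")
request.approve("finance.manager")    # verified over an independent channel
print(request.can_execute())          # False: a second approver is required
request.approve("operations.director")
print(request.can_execute())          # True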


Social media policy

A robust policy for social media use by employees should clearly emphasise the potential risks of sharing work-related and personal information. Employees should also be aware of how to verify any information that comes through social media channels.


Reviews and auditing

A thorough incident response plan should be in place before any deepfake attack occurs, including a clear communication chain and protocols for reporting and dealing with any threats.

Legal experts can carry out a review focusing specifically on deepfake attacks and the options for legal recourse. Employee contracts and company policies should contain provisions that cover digital content and technology misuse.

Cybersecurity concerns can be addressed in part through closer collaboration between IT teams and HR. Any security measures put in place should be reviewed and updated regularly to keep pace with evolving AI.

Continuous monitoring of communications and digital traffic can help with detecting unusual or suspicious activities and patterns that could indicate deepfake attacks.
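
As a simple, hypothetical illustration of this kind of monitoring, the sketch below flags activity counts that deviate sharply from an established baseline; real deployments use dedicated security monitoring tooling, and the data and threshold here are assumptions.

import statistics

baseline = [42, 38, 45, 40, 41, 39, 44]   # e.g. daily outbound video calls
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_suspicious(todays_count: int, threshold: float = 3.0) -> bool:
    """Flag a count more than `threshold` standard deviations from baseline."""
    return abs(todays_count - mean) > threshold * stdev

print(is_suspicious(43))   # False: within normal variation
print(is_suspicious(90))   # True: a spike worth investigating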


Get expert advice

Get support and advice from our team of professionals at Neathouse Partners if you have questions about deepfakes and their impact on your business. Our team of HR consultants and employment lawyers can help you navigate the implementation of a deepfake response policy.

Call 0333 041 1094 today or use our contact form.