Your reputation and AI

The pace of AI development shows no sign of slowing down. As its capabilities grow, so do the ways in which it can affect an individual's or a corporation's reputation.

The use of “deepfake” images and videos of well-known individuals is becoming increasingly common, but it is not only those in the public eye who might be affected. Deepfakes are photographs or videos which have been manipulated and digitally altered to create a convincing likeness of something that never actually existed – a synthetic recreation of an individual’s appearance and/or voice. Any images on public social media sites or company websites can be manipulated, as can extracts of your voice taken from videos, making viewers believe that what they are seeing and hearing is really you. Not only are there sites where deepfakes can be made for free, but the purposes to which these images and videos can be put are, of course, endless.

Taylor Swift is one of the most recent celebrities to have been targeted by deepfakes, joining the likes of Tom Hanks at the end of last year. Whilst Tom Hanks’ image was used to promote a dental plan, AI-generated images of Taylor Swift depicted her in a series of explicit scenarios involving the Kansas City Chiefs (her boyfriend’s sports team).

Examples like this are not only a disturbing intrusion into individuals’ private lives; they also reveal the potential horrors of deepfakes, exacerbated by the current lack of regulation in the United Kingdom. Whilst such content generally ‘only’ circulates on social media sites such as X, it is in reality widely accessible online, demonstrating the ease with which individuals’ private lives and reputations can be distorted and damaged by AI.

In the United Kingdom, not all deepfakes are, in themselves, illegal. Whilst they may infringe copyright and data protection law and have the potential to be defamatory, the specific protection available in England & Wales to individuals in the public eye is not always clear, and legislation is often not enacted quickly enough to keep pace with developments in technology. Whilst the Online Safety Act creates a new criminal offence of sharing deepfake pornography, it does not restrict any other type of synthetically generated content made without the subject’s consent. Individuals therefore currently remain at risk of being defamed, embarrassed or publicly censured over content to which they did not consent.

In these circumstances, it is important for affected individuals to understand, and be able to rely on, the various forms of recourse available under English law. If you require further advice, or specific legal advice on topics such as invasion of privacy or online harassment, please do not hesitate to contact a member of our team.
