Safeguarding your business in the wake of the ChatGPT share breach

In today’s fast-paced digital landscape, businesses are increasingly leveraging Artificial Intelligence (AI) tools such as OpenAI’s ChatGPT to streamline operations.

However, recent developments surrounding the now-discontinued “share” feature of ChatGPT should serve as a critical reminder of the importance of robust data governance and proactive measures to safeguard sensitive information, such as personal data and confidential business information.

What happened?

OpenAI recently faced scrutiny after its “share” feature in ChatGPT appeared to inadvertently expose private conversations to public search engines such as Google. While the feature allowed users to share chat links, discrepancies in the user interface and terms across platforms (e.g., Web, iOS, Android) led to confusion over whether shared chats were private or publicly discoverable. Although OpenAI has since removed the feature, describing it as a “short-lived experiment”, and requested the removal of indexed links from search engines, researchers have alleged that over 100,000 conversations, many containing personal data, were archived and remain accessible in some instances.

At the time of writing, it is also reported that chats shared from xAI’s “Grok” platform have been exposed online, highlighting a common risk across the industry.

Why it matters to your business

This issue underscores the risks associated with using AI tools and highlights potential vulnerabilities that could expose sensitive company or client data. For businesses, the key takeaways are:

  • Personal data: Conversations shared through AI platforms may include personal data about your employees, customers or clients. Several data protection compliance issues must be considered before sharing personal data with AI platforms, from meeting transparency requirements via privacy policies to carrying out supplier due diligence on your data processing agreements with those providers.
  • Confidential information: As with personal data, conversations shared through AI platforms may reveal your internal strategy or intellectual property. Once shared outside your business, such information can be challenging to remove entirely.
  • Reputational damage: Data leaks can severely impact your brand’s reputation, erode client trust, and lead to loss of business.
  • Regulatory implications: Mishandling of sensitive data could result in non-compliance with data protection laws such as the UK GDPR, leading to fines and legal challenges. Such fines can be up to £17.5m or 4% of your annual global turnover (whichever is the greater).
  • Legal claims: Clients or other individuals whose data is exposed may bring legal claims for breach of contract, breach of confidence, privacy or their data protection rights, and complain to the data protection regulator. Some larger data breaches have also attracted attempts to start ‘class-action’ claims.

What should you do?

If your organisation uses AI tools such as OpenAI’s ChatGPT, now is the time to review and strengthen your policies and practices. Below are some actionable steps to consider:

1. Implement an AI usage policy

If you haven’t already, establish a clear AI usage policy within your organisation. This should cover:

  • Approved AI tools and platforms
  • Guidelines on the type of information that can be inputted into AI systems
  • Specific processes for sharing data generated by AI tools

2. Train employees

Educate employees on the risks of using AI tools and ensure they understand how to use these platforms responsibly. Emphasise the importance of avoiding inputting personal data or confidential data into AI systems.
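Training can be backed up by simple automated checks. The sketch below is illustrative only and is not a substitute for proper data loss prevention tooling: the allowlist entries and the regular expressions for spotting personal data are hypothetical assumptions, not a complete or reliable detection method.

```python
# Hedged sketch: a minimal pre-submission screen that flags obvious
# personal data (email addresses, UK-style phone numbers) and
# unapproved tools before text is sent to an external AI platform.
# Tool names and patterns below are illustrative assumptions only.

import re

APPROVED_TOOLS = {"chatgpt", "internal-llm"}  # hypothetical allowlist

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def screen_prompt(tool: str, text: str) -> list[str]:
    """Return a list of reasons to block the prompt (empty list = OK)."""
    issues = []
    if tool.lower() not in APPROVED_TOOLS:
        issues.append(f"tool '{tool}' is not on the approved list")
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            issues.append(f"possible {label} detected")
    return issues
```

In practice a screen like this would sit inside a browser plugin or an internal gateway in front of the AI tool, so that flagged prompts are held back for review rather than silently submitted.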

3. Conduct data audits

Review your organisation’s use of AI tools to identify any potential exposure of data. If you suspect that data may have been shared via ChatGPT’s “share” feature, investigate whether these links have been indexed and take immediate steps to request their removal.
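One practical starting point for such an audit is a search-engine “site:” query against the share-link domain. The helper below is a minimal sketch: the share-link path (`chatgpt.com/share`) is an assumption based on publicly observed URLs and may change, and the generated query is intended to be pasted into a browser rather than scraped programmatically.

```python
# Hedged sketch: helpers for auditing whether ChatGPT share links
# remain discoverable. The share-link path is an assumption based on
# publicly observed URLs and may differ or change over time.

from urllib.parse import quote_plus

SHARE_PATH = "chatgpt.com/share"  # observed share-link path; may vary

def index_check_query(keyword: str = "") -> str:
    """Build a Google 'site:' search URL to spot indexed share links.

    Paste the returned URL into a browser; any results suggest shared
    chats (optionally mentioning `keyword`) are still discoverable.
    """
    q = f"site:{SHARE_PATH} {keyword}".strip()
    return "https://www.google.com/search?q=" + quote_plus(q)

def audit_share_urls(urls: list[str]) -> list[str]:
    """Filter a list of URLs down to ones that look like share links."""
    return [u for u in urls if SHARE_PATH in u]
```

Running `audit_share_urls` over exported browser histories or link logs can help surface share links your staff have created, which can then be checked for indexing and submitted for removal.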

4. Monitor evolving AI risks

AI technology evolves rapidly, and so do its associated risks. Stay updated on developments in the AI space, including how tools such as ChatGPT handle data and privacy.

5. Seek legal support

If your business is impacted by the ChatGPT share breach or similar issues, legal advice can help you assess your exposure, address potential liabilities, and implement stronger safeguards.

How we can help

We understand the complex intersection of technology, data, and the law. Our team of experts can assist you with:

  • Drafting and implementing AI usage policies tailored to your business
  • Conducting data audits to assess your organisation’s risk exposure
  • Advising on regulatory compliance and potential liabilities
  • Supporting you with incident response and remediation in the event of a data breach, regulatory involvement, and legal claims

If you have any questions about how the OpenAI ChatGPT share breach might affect your business or need assistance in implementing preventative measures, please don’t hesitate to contact one of our specialists.

AUTHORS

Michael Yates Partner

Michael is an information litigator who specialises in advising individuals and companies on reputation management, cyber crisis management and information, data privacy and media law disputes.

He covers the full spectrum of contentious matters, including in-print and online defamation, malicious falsehood, misuse of private information, breach of confidence, data protection, cyber attacks, data breaches, information theft, harassment, blackmail, right to be forgotten and subject access requests. He also advises on regulatory media and data complaints, reporting restrictions, NDAs, injunction applications, Norwich Pharmacal applications, online takedowns, apologies, damages claims and coroners' proceedings.

Michael often advises clients urgently when they are in a crisis, typically when trying to protect reputation by stopping or mitigating the publication or broadcast of a false story, project managing a response to a cyber attack or preventing the unlawful misuse or disclosure of information. He also provides regular training and preparedness sessions to clients to help get ahead of a media or cyber crisis.

He also protects publishers, platforms, data controllers and processors from legal claims.

Michael is ranked as ‘Up and coming’ in Chambers and Partners and is ‘Recommended’ by Spears Magazine.

Lizzie Williams Partner

Lizzie Williams is a partner and solicitor advocate specialising in commercial litigation.

Lizzie has a diverse commercial disputes practice and wide-ranging experience of litigation and arbitration including urgent injunctions, appeals and group litigation. Lizzie acts for a wide range of clients, from high net worth individuals to large corporates, including technology companies, established brands across a broad range of industries, public sector entities and startups.

Lizzie has particular expertise in commercial disputes with a technology angle. Lizzie advises on traditional IT disputes (involving hardware, software development, outsourcing and licensing) and disputes involving emerging technologies (including artificial intelligence, digital assets and blockchain). In addition, Lizzie advises on disputes arising out of cyber-attacks and online payment frauds, disputes involving investments into technology companies, disputes about technology procurement processes and the management and resetting of distressed digital transformation projects.

Lizzie is recognised as a "Key Lawyer" in Commercial Litigation and Artificial Intelligence in The Legal 500. Clients say Lizzie “is the best commercial litigator around” and praise her “calm, responsive and very creative approach delivered with considerable expertise”.

Lizzie is the author of the Practical Law practice note AI Disputes and Risk Mitigation and the book A Practical Guide to Smart Contracts and the Law and regularly speaks at industry events.

Lizzie graduated from the University of Cambridge with a First Class degree in Law in 2010 before training and qualifying at Herbert Smith Freehills, where she worked on a variety of complex litigation and arbitration matters for a number of years, before joining Harbottle & Lewis in 2017. Lizzie is a member of the Society for Computers & Law, the Tech Disputes Network, the Cyber Fraud and Asset Recovery Network and the Silicon Valley Arbitration & Mediation Center.

Hugo Tyrrell Senior Associate

Hugo Tyrrell advises clients on media and information law.

His practice covers privacy, defamation, data protection, confidentiality and other information laws and litigation. He advises individuals and corporations on the publication of material in any format, whether by newspapers, online, social media or television and film.

Nadia Ahmed Associate

Nadia is an associate specialising in data protection, privacy and information law.

She advises on compliance with data protection laws and information laws, including the UK and EU General Data Protection Regulation (GDPR), the Data Protection Act 2018, the Freedom of Information Act (FOIA) and codes of practice issued by the ICO and other data protection regulators.

She assists clients with data protection agreements/addendums (DPA), data protection impact assessments (DPIA), and drafting and reviewing privacy policies, cookies policies and cookie banners. Nadia also handles contentious data protection matters, such as communications with the ICO, personal data breaches and data subject requests, including data subject access requests (DSAR). She keeps clients informed of changes to data protection laws and updated guidance from data protection regulators, and provides training to legal teams and employees on data protection best practices. Nadia has also been seconded to clients to help ensure their GDPR and information law procedures are effective and meet the necessary standards.

Nadia works with a wide range of clients, from small businesses to large corporations, to help them understand their legal obligations and develop data protection strategies and programmes for compliance with data protection laws. Such clients include those in the fashion and retail sector, streaming services, gaming, technology and more.

Nadia holds the Certified Information Privacy Professional/Europe (CIPP/E) certification from the IAPP and is a member of the Society for Computers and Law.