Consultant Abigail Healey examines the defamation case against ChatGPT in The Times

April 13, 2023

Consultant Abigail Healey discusses how the threat of litigation against ChatGPT’s developer, OpenAI, may well open the floodgates to a deluge of similar cases.

Abigail’s article was published in The Times on 13 April 2023.

In the same week that big tech leaders called for an urgent halt to ‘dangerous’ artificial intelligence (AI) development which they fear could take over the world, the developers of ChatGPT faced a more pressing legal issue. OpenAI, whose chatbot has rocketed to global prominence since its November 2022 launch, has been threatened with its first defamation claim after ChatGPT falsely claimed that an Australian mayor had served time in prison for his role in a bribery scandal.

This legal development was almost inevitable, given that ChatGPT depends on the internet to procure its information, with no filtering of the content it harvests to produce its text. Given the permanence of information stored on the web, Mayor Brian Hood was not content to follow the Duke of Wellington’s “publish and be damned” approach. Instead, his lawyers sent a letter to OpenAI stating that he would sue in defamation should it not correct the offending statements.

That ChatGPT became so seriously confused when writing its text shows the limitations of relying on automation to produce copy, and this is not the first time the bot has got its facts badly wrong.

Last week, an American professor said that he had been falsely accused by ChatGPT of sexually assaulting students, despite never having worked at the institution where the bot claimed he taught. Earlier this month, Italy’s data protection regulator banned ChatGPT, citing, amongst other concerns, false information supplied by the bot.

ChatGPT churns out content unchecked and untrammelled at breakneck speed, whilst seeking to hide behind the disclaimer that it can produce output that is “inaccurate, untruthful, and otherwise misleading”. As chatbots become more widespread and accessible, we are likely to see an increase in litigation involving AI. Given the novelty of technology such as ChatGPT, it will test whether existing law remains fit for purpose (a government White Paper on AI was published at the end of March).

When (not if) the first defamation claim comes before the English courts, one of the issues likely to be considered afresh in the light of this new technology is liability for content that is not edited or monitored. However, once an AI developer has been put on notice of defamatory content on its platform and chooses not to act, it is difficult to see how potential liability can be avoided.

This first threat of litigation may well open the floodgates to a deluge of similar cases. OpenAI will already be weighing up the legal risks attached to its unleashing of ChatGPT, as concerns grow that “publish and be sued” will be the response each time the bot oversteps the mark.