OpenAI Faces Legal Battle: AI Chatbot ChatGPT Accused of Spreading Defamatory Information

The lawsuit emerges amid concerns over AI’s capability to generate false information and highlights the necessity for stricter regulations.

In a landmark legal case, OpenAI, the creator of the widely used AI chatbot ChatGPT, is being sued for defamation by an individual who alleges that the chatbot disseminated erroneous and damaging information about him. The case sheds light on the broader issue of the accuracy and reliability of information produced by artificial intelligence systems, and the harm such output can inflict on individuals' reputations.

Mark Walters has filed a lawsuit against OpenAI, claiming that ChatGPT falsely identified him as being involved in an ongoing criminal case. The AI reportedly named him as the chief financial officer of the Second Amendment Foundation (SAF), a pro-gun group in Washington State, and accused him of defrauding the foundation and embezzling money.

The lawsuit highlights a conversation with ChatGPT in which it claimed that the criminal case was filed by Alan Gottlieb, the founder of SAF, against Mark Walters. The chatbot further alleged that Walters had misappropriated funds for personal expenses, manipulated financial records to conceal his activities, and failed to provide accurate financial reports to the SAF’s leadership.

Walters asserts that the information provided by ChatGPT is entirely false. He is neither a plaintiff nor a defendant in the case mentioned, and he contends that every statement about him in ChatGPT's summary of the case is incorrect. The legal documents in the actual case make no mention of Walters at all. Moreover, he resides in Georgia, far from Washington State, where the case is based.

Walters' legal representation argues that ChatGPT's allegations were "false and malicious" and have harmed Walters' reputation. They claim that the AI's response was a fabrication that has exposed Walters to public contempt and ridicule.

OpenAI has acknowledged that ChatGPT, like other AI models, has a tendency to generate false or "hallucinated" information. The company has faced increasing scrutiny and criticism over ChatGPT's propensity to produce false information, as well as over its questionable privacy practices. Notably, Italy previously banned the chatbot for almost a month over privacy concerns.

This lawsuit comes at a time when ChatGPT’s popularity is skyrocketing, with user numbers expected to rise from 100 million in January to over 800 million this month. It also coincides with OpenAI’s CEO, Sam Altman, meeting with the US Congress to discuss the need for regulation concerning the development of AI.

Altman voiced concerns over AI's ability to manipulate and persuade, potentially spreading disinformation, and was particularly worried about its capacity to influence elections.

As AI continues to permeate daily life, this case serves as a reminder of the challenges and responsibilities that come with its use. It underscores the importance of ensuring accuracy and reliability in AI systems, and the need for regulation to address these emerging concerns.