Georgia Man Now Sues ChatGPT For Calling Him a Thief

Man sues ChatGPT for false claims

Employees have long feared that ChatGPT will render workers jobless, but few have been bold enough to take legal action against OpenAI. For Mark Walters, a radio host in Georgia, ChatGPT cannot go unpunished for claiming that he embezzled funds from a gun rights nonprofit organization. According to his complaint, the chatbot not only damaged his reputation by tying him to a case he was never involved in but also fabricated entire passages of the supposed lawsuit.

When ChatGPT was prompted with a query about that real court case, the tool provided completely fictional passages claiming that the Second Amendment Foundation’s founder sued Walters for fraud and embezzlement of funds.

Walters, who hosts Armed America Radio, has never worked for the nonprofit organization nor been a party to any such lawsuit.

ChatGPT Generates False Claims

This is not the first time generative AI has put humans in an awkward situation. In April, a Chinese man from the northwestern province of Gansu was detained for using AI to generate a fake news story claiming a train crash had killed nine construction workers. The post received more than 15,000 clicks on social media by 25 April 2023.

Similarly, a New York attorney faces possible sanctions before Judge P. Kevin Castel after ChatGPT supplied him with fictitious legal research. The lawyer admitted that the sources he relied on were unreliable. While the case awaits a ruling in a few weeks, these incidents show that AI can sometimes go quite wrong.

Why Does ChatGPT Give False Claims?

Limited training data: While the tool has been trained on a vast amount of text from the internet, that data has a cutoff date and inevitably leaves gaps, so the model may simply lack the information needed to answer a query accurately.

Ambiguity or interpretation: Language can be ambiguous, and sometimes the context or intent of a question or statement can be unclear. As a result, the tool may generate a response based on one possible interpretation, which might not align with the user’s intended meaning.

Errors in the training data: The training data used to train models like ChatGPT are collected from various sources, and there is a possibility of containing inaccuracies or biases. This can lead to the model generating responses that are not entirely accurate or might reflect those biases.

Lack of fact-checking: As an AI model, ChatGPT does not have real-time access to current information or the ability to fact-check responses.

Hallucination: In AI, hallucination refers to situations where a system generates output that is not grounded in its training data or in reality, inventing information that sounds plausible but is not factual.
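The last point can be illustrated with a deliberately tiny sketch. A language model predicts the next word from statistical patterns in its training text and has no notion of truth, so recombining fragments can yield fluent statements that appear nowhere in the data. The toy bigram model below (the sentences and names are invented for illustration; this is nothing like ChatGPT's actual architecture) shows the mechanism:

```python
import random

# Hypothetical training sentences, invented purely for illustration.
training_sentences = [
    "the foundation sued the company for fraud",
    "the radio host praised the foundation",
    "the company settled the lawsuit quietly",
]

# Build a bigram table: each word maps to the words that followed it.
bigrams = {}
for sentence in training_sentences:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, []).append(b)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = bigrams.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Because generation only follows word-to-word statistics, the output can splice fragments into a sentence that was never in the training data, for instance one that wrongly links "the radio host" to "fraud". Scaled up by many orders of magnitude, the same fluent-but-ungrounded behavior is what makes hallucinated claims sound convincing.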

While it is difficult to predict the outcome of the lawsuit, its possible implications for AI liability and regulation, the ethics of AI, and the legal status of AI systems are worth watching. That is good news for employees in the legal profession who feared they would soon lose their jobs to the AI model. These lifeless bots can be smarter than humans in places, but they still have a long way to go before they win our trust on a witness stand.


Aparajeeeta Das