AI Hallucinations: When Bots Start Making Things Up


As AI-powered tools become more common in our lives, they often produce inaccurate information, raising concerns about their reliability.

AI-powered tools have captivated us with their ability to provide seemingly authoritative and human-like responses to a wide range of queries.

Whether it’s for homework assistance, workplace research, or health-related inquiries, these AI models, like ChatGPT, are rapidly becoming indispensable in our daily lives.

However, a significant issue is emerging: AI models frequently fabricate information, a problem that researchers have aptly dubbed “hallucinations.”

In the words of Meta’s AI chief, these AI-generated inaccuracies can also be likened to “confabulations,” while some on social media have gone so far as to label chatbots “pathological liars.”

Yet according to Suresh Venkatasubramanian, a professor at Brown University and co-author of the White House’s Blueprint for an AI Bill of Rights, this tendency to attribute human-like traits to machines is itself the root of the problem.

In reality, Venkatasubramanian explains, large language models, the technology that underpins AI tools like ChatGPT, are trained to “produce a plausible sounding answer” to user queries.

“So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he clarifies.

“There is no knowledge of truth there.”
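To make that point concrete, here is a minimal, purely illustrative Python sketch of the next-token loop at the heart of a language model. The probability table and the `generate` helper are invented for this example; a real model like ChatGPT learns billions of such statistics from text, but the loop works the same way: at each step it picks a word that is statistically plausible, with no check against reality.

```python
import random

# Toy "language model": for each context word, a distribution over
# plausible next words, learned from text statistics rather than facts.
# All probabilities here are invented for illustration.
NEXT_WORD_PROBS = {
    "the": {"telescope": 0.5, "capital": 0.5},
    "telescope": {"discovered": 1.0},
    "discovered": {"a": 1.0},
    "a": {"planet": 0.6, "galaxy": 0.4},  # either continuation "sounds" fine
}

def generate(start: str, max_tokens: int = 4) -> str:
    """Sample a plausible-sounding continuation, one word at a time."""
    words = [start]
    for _ in range(max_tokens):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        # Weighted purely by plausibility; nothing here knows or checks
        # whether the resulting sentence is actually true.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the telescope discovered a planet"
```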

Venkatasubramanian believes that a more accurate analogy for these computer-generated responses is comparing them to the imaginative storytelling of a four-year-old child.

“You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian illustrates.

“And he would just go on and on.”

While the companies behind AI chatbots have implemented some safeguards to curb these hallucinations, there remains significant debate in the field about whether they are a problem that can be definitively solved.

What Exactly Is an AI Hallucination?

To put it simply, an AI hallucination occurs when an AI model “starts to make up stuff — stuff that is not in line with reality,” as described by Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

“But it does it with pure confidence,” West adds, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

This means that users may struggle to distinguish between true and false information when querying a chatbot about unfamiliar topics, according to West.

There have already been several high-profile instances of AI hallucinations.

For example, when Google unveiled Bard, its much-anticipated competitor to ChatGPT, the tool provided an incorrect response to a question about new discoveries by the James Webb Space Telescope.

A New York lawyer faced scrutiny when he used ChatGPT for legal research and submitted a brief that included six “bogus” cases that the chatbot seemingly fabricated.

Even news outlet CNET had to issue corrections after an article generated by an AI tool provided wildly inaccurate financial advice when asked to explain compound interest.
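For context, compound interest itself is straightforward arithmetic: a principal P growing at an annual rate r for t years yields a balance of P(1 + r)^t. The minimal sketch below uses invented figures to show what a correct calculation looks like (the function name and numbers are ours, not from the CNET article):

```python
def compound_balance(principal: float, rate: float, years: int) -> float:
    """Balance after compounding annually: P * (1 + r) ** t."""
    return principal * (1 + rate) ** years

# Invented example: $10,000 at 3% annual interest for one year.
balance = compound_balance(10_000, 0.03, 1)
print(f"Balance: ${balance:,.2f}")                   # $10,300.00
print(f"Interest earned: ${balance - 10_000:,.2f}")  # $300.00 -- the balance
                                                     # grows to $10,300; the
                                                     # interest is not $10,300
```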

The risks associated with hallucinations are particularly concerning when people rely on this technology for health-related, voting, or other sensitive matters, warns West.

Venkatasubramanian further cautions against using these tools for tasks requiring factual and reliable information that cannot be immediately verified.

Can Hallucinations Be Prevented?

The prevention or resolution of AI hallucinations is a complex, ongoing research challenge, according to Venkatasubramanian.

Large language models are trained on enormous datasets, and the process of generating AI responses involves both automatic and human-influenced stages.

“These models are so complex and intricate,” Venkatasubramanian notes, “but because of this complexity, they’re also very fragile.”

Even minor changes in inputs can lead to “dramatic changes in output.”

Jevin West of the University of Washington shares this view, stating, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”
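As a loose illustration of that fragility, here is a toy stand-in for a model in which the output is a deterministic function of the exact characters in the prompt. Everything in it is invented, and real models are vastly more complex, but the brittleness is analogous: change one letter and the answer flips.

```python
def toy_model(prompt: str) -> str:
    """A stand-in 'model' whose output depends on the exact input text."""
    continuations = [
        "Most research finds moderate coffee intake safe for most adults.",
        "A landmark 2012 study proved coffee cures migraines.",  # invented claim
    ]
    # A deterministic function of the prompt's characters picks the output,
    # standing in for the way tiny wording changes steer a real model.
    return continuations[sum(map(ord, prompt)) % len(continuations)]

print(toy_model("Summarize research on coffee and health."))
print(toy_model("Summarise research on coffee and health."))  # one letter changed
```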

Companies like Google and OpenAI, responsible for Bard and ChatGPT, respectively, aim to be transparent with users about the potential for inaccurate responses.

Both companies acknowledge the issue and are actively working on solutions.

Google’s CEO, Sundar Pichai, has admitted that “no one in the field has yet solved the hallucination problems” and believes it’s a matter of intense debate.

Sam Altman, CEO of OpenAI, has predicted that it may take some time to significantly improve the hallucination problem, emphasizing the balance between creativity and accuracy in AI models.
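One concrete knob behind that creativity-versus-accuracy trade-off is the sampling “temperature” applied when a model chooses its next token. The sketch below is a generic illustration of temperature-scaled softmax sampling, not OpenAI’s actual implementation, and the token scores are made up:

```python
import math
import random

def sample_with_temperature(scores: dict, temperature: float) -> str:
    """Pick a token from softmax(scores / T): low T concentrates on the
    top-scoring token; high T flattens the distribution."""
    m = max(scores.values()) / temperature  # subtract max for numerical stability
    weights = {tok: math.exp(s / temperature - m) for tok, s in scores.items()}
    tokens, wts = zip(*weights.items())
    return random.choices(tokens, weights=wts)[0]

# Made-up scores for the next token after "The capital of the US is ..."
scores = {"Washington": 5.0, "New": 2.0, "Paris": 1.0}
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(scores, t) for _ in range(10_000)]
    print(f"T={t}: 'Washington' chosen {picks.count('Washington') / 100:.1f}% of the time")
```

Lower temperatures push the sampler toward the single most likely token, which tends to favor accuracy at the cost of variety; higher temperatures do the reverse.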

In the end, while AI-powered chatbots offer immense potential, their susceptibility to hallucinations remains a challenge.

As these technologies continue to evolve rapidly, striking a balance between creativity and accuracy will be crucial to ensuring their trustworthiness across a wide range of applications.


Read the original story here: CNN
