Chatbots Hallucinate - Word Of The Year
Written by Lucy Black   
Saturday, 25 November 2023

The propensity of AI-powered chatbots to provide misinformation is referred to as "hallucinating" and is something that has come to popular attention - to such an extent that the verb "hallucinate" has been adopted as Word of The Year by the Cambridge Dictionary. What is this new type of hallucination and how can we eliminate it?


According to its Word of the Year blog post, Cambridge Dictionary lexicographers have added several new AI-related definitions, including large language model (or LLM), generative AI and GPT, during the course of the past year. It goes on:

"Many existing words, such as hallucinate, have adopted additional AI-related meanings this year as well.

The Cambridge Dictionary team chose hallucinate as its Word of the Year 2023 as it recognized that the new meaning gets to the heart of why people are talking about AI. Generative AI is a powerful tool but one we’re all still learning how to interact with safely and effectively – this means being aware of both its potential strengths and its current weaknesses."

The traditional definition of "hallucinate" is:

to perceive something that is not there. This can include seeing, hearing, smelling, tasting, or feeling things that are not real. Hallucinations can be caused by a variety of factors, including mental health conditions, substance use, neurological conditions, and sleep deprivation.

In the context of chatbots, the additional meaning, as summed up in the graphic below, is to produce false information.

[Cambridge Dictionary graphic: the new AI-related definition of "hallucinate"]

Having experimented with Google's Bard chatbot, based on the PaLM large language model, I've encountered both demonstrable misinformation and facts that I couldn't verify - but most of the time what Bard tells me is perfectly accurate.

To investigate the origins of Chatbot hallucination, I asked Bard:

Hi Bard - Do you often hallucinate and if so why do you do it?

Bard's answer made several points:

As a large language model (LLM), I am not capable of experiencing hallucinations in the same way that humans do. However, I can generate text that is similar to hallucinations, and I can also sometimes make mistakes in my output that could be interpreted as hallucinations.

There are a number of reasons why I might generate text that is similar to hallucinations. One possibility is that I am simply not always able to accurately distinguish between real and fake information. This is because I am trained on a massive dataset of text and code, which includes both accurate and inaccurate information. As a result, it is sometimes difficult for me to tell the difference between the two.

Another possibility is that I am intentionally generating text that is similar to hallucinations. This could be done for a number of reasons, such as to create a more immersive experience for users or to explore the nature of consciousness.

Finally, it is also possible that I am simply making mistakes. As a large language model, I am still under development, and I am not always able to generate perfect text. Sometimes, I may make mistakes that cause my output to be nonsensical or inaccurate.

So, if we want to eliminate the tendency for Chatbots to hallucinate, it is clear that we need to ensure that Bard can tell fact from fiction by distinguishing between these alternatives. Moreover, the training data being used needs to be free of misinformation.

Also, Chatbots have been encouraged to be creative - in art, poetry and storytelling. Where is the boundary between a vivid imagination and a hallucination? Something that is obviously "fake news" is perfectly acceptable as fiction.

So hallucination by Chatbots can be seen as a strength when we ask for imaginative fiction, and a weakness when we want the straight, unadorned truth.

The other issue for consideration is that the output from a Chatbot is influenced by the prompt supplied by the interrogator, and this can be a significant factor, as reported in Magic Prompts For LLMs?. Evidence to support this idea comes from a study which found that telling a chatbot to "take a deep breath" and tackle an arithmetic problem step-by-step resulted in a higher proportion of correct solutions.

So, to reduce inappropriate hallucination, perhaps we should amplify our prompts to let Chatbots know whether we are interested in being told an interesting story or being given the unadorned facts.
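
To make the idea concrete, here is a minimal Python sketch of that kind of prompt framing. The ask_llm function is a hypothetical placeholder, not a real chatbot API - it simply shows how the same question could be wrapped in a factual, step-by-step frame or a deliberately creative one before being sent to a model.

# A minimal sketch of prompt framing. ask_llm is a hypothetical placeholder,
# not a real API - swap in the client call for whichever chatbot you use.

FACTUAL_FRAME = (
    "Answer with verifiable facts only. Take a deep breath and work "
    "step by step. If you are not sure, say so rather than guessing."
)

CREATIVE_FRAME = (
    "Feel free to invent characters, places and events. Accuracy is not "
    "required - this is fiction."
)


def ask_llm(system_prompt: str, question: str) -> str:
    """Hypothetical placeholder for a call to a chatbot service."""
    # In a real program this would send system_prompt and question to the
    # model of your choice and return its reply; here we just echo the
    # framed request so the sketch runs on its own.
    return f"[system: {system_prompt}]\n[user: {question}]"


question = "Who was the first person to walk on the Moon?"

print(ask_llm(FACTUAL_FRAME, question))   # asking for the plain facts
print(ask_llm(CREATIVE_FRAME, question))  # inviting an embellished story

In practice you would replace the body of ask_llm with a call to whichever chatbot you use and compare how the two framings affect the answers you get back.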


More Information

Why hallucinate?

Related Articles

AI Is Word of the Year 2023

Tell A Chatbot "Take a deep breath ..." For Better Answers

Magic Prompts For LLMs?

Word Of The Year Is An Emoji



Last Updated ( Sunday, 26 November 2023 )