Why are “hallucinations” so dangerous in AI?

Hallucinations pose a significant risk to both users and the companies behind chatbots.

In the context of AI, hallucinations generally refer to situations in which a model generates inaccurate or irrelevant information. This behavior can manifest in several ways, such as producing factually incorrect responses or generating inconsistent content.

For example, OpenAI's AI chatbot, ChatGPT, stated in one paragraph:

"The coronation ceremony was held at Westminster Abbey in London on May 19, 2023. The abbey has been the site of the coronation of British monarchs since the 11th century and is considered one of the most sacred and symbolic places in the country." This information is incorrect, since the event in question actually took place on May 6, 2023.

ChatGPT 3.5 warns that its ability to generate responses is limited to information available on the internet up to September 2021, which means it may struggle to answer queries about more recent events accurately. OpenAI explained at the launch of GPT-4 that the model still has many limitations, such as social biases, possible hallucinations, and inconsistencies in its responses.

ChatGPT 3.5's knowledge is limited to information available up to September 2021. (Pexels)

A second error, or hallucination, was detected in the Bing search engine, which presented a supposed theory about the origins of search algorithms attributed to Claude Shannon. The result included several citations in an attempt to support the research article.

However, the problem was that Shannon never wrote such an article: the citations provided by Bing turned out to be fabrications generated by the artificial intelligence.

Generative AI and reinforcement learning algorithms have the ability to process massive amounts of information on the Internet in a matter of seconds and create new texts that are often coherent and well-written.

Many experts warn that users should be cautious about the reliability of these texts; in fact, both Google and OpenAI have asked users to keep this in mind.

Microsoft’s chatbot, Bing, has also suffered from hallucinations.

In the case of OpenAI, which partners with Microsoft on its Bing search engine, the company notes that "GPT-4 has a tendency to 'hallucinate', meaning it can produce content that is nonsensical or untruthful in relation to certain sources."

The potential risks of hallucinations in AI are numerous and can have significant impacts. The most important risks are the following:

– Misinformation and spreading false information: If AI creates false or misleading information, it can contribute to the spread of misinformation, which can be harmful in a variety of contexts, such as spreading fake news or creating inaccurate content.

– Loss of credibility: When an AI chatbot regularly generates inconsistent or incorrect content, it can lose the trust of its users, limiting its usefulness and effectiveness.

– Biases and prejudices: Hallucinations can produce content that reflects biases present in the training data, which may be discriminatory or harmful to certain groups.

When faced with this type of failure, the user's role is always to verify and review the data. (Europe)

– Risks in critical applications: In areas such as medical or legal decision-making, where the information produced must be accurate, hallucinations can have serious consequences.

– Ethical and liability problems: AI developers and owners may face ethical and legal challenges if the AI creates inappropriate or harmful content.

To avoid being misled by hallucinations or errors produced by artificial intelligence, it is important to follow a few guidelines, such as verifying the information and understanding the context in which data is exchanged with the AI. It also helps to have an idea of how the language model was trained and what its limitations are, for example whether its knowledge only extends up to a certain date, as illustrated in the sketch below.
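As a rough illustration of the "know the model's limitations" guideline, the following minimal sketch (in Python, assuming the official openai SDK version 1.0 or later and an OPENAI_API_KEY environment variable; the model name and prompt wording are examples only, not a recommendation) asks the model to state its knowledge cutoff and to flag claims it cannot attribute to a source, so the reader knows which parts still need manual verification.

# Minimal sketch: surface the model's knowledge cutoff and ask it to flag
# unverifiable claims, so the user knows what still needs manual checking.
# Assumes the official openai Python SDK (>= 1.0) with OPENAI_API_KEY set
# in the environment; model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_caveats(question: str, model: str = "gpt-4o-mini") -> str:
    # The system prompt asks the model to disclose its cutoff date and to
    # mark unsourced statements, mirroring the "verify the information" advice.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "Begin your answer by stating your knowledge cutoff date. "
                    "Mark any statement you cannot attribute to a source with "
                    "'[unverified]' so the reader knows to double-check it."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_caveats("When was the coronation of King Charles III?"))

Even with prompts like this, the answer is not guaranteed to be correct; the point is simply to make the model's limitations visible so the user remembers to verify the output against a reliable source.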

