Thursday 15 June 2023

Humans and AI Experience Hallucinations Differently

The launch of highly capable large language models (LLMs) such as GPT-3.5 has ignited significant interest in recent months. However, users have grown wary of these models as they discover that, much like humans, they are prone to error.

When an LLM produces inaccurate information, it is considered to be “hallucinating”. This has led to an increasing research effort aimed at minimizing this effect. But as we tackle this challenge, it is important to reflect on our own capacity for bias and hallucination, and how this impacts the accuracy of the LLMs we create.

By understanding the connection between AI’s potential for hallucination and our own, we can develop smarter AI systems that ultimately reduce human error.

Understanding Human Hallucination

It is no secret that people sometimes fabricate information, whether intentionally or unintentionally. Unintentional fabrication arises from cognitive biases and “heuristics”: mental shortcuts we develop through past experience.

These shortcuts are a necessity because we can only process a limited amount of information at any given moment and remember only a fraction of all the information we have ever been exposed to.

Therefore, our brains rely on learned associations to fill in the gaps and quickly respond to questions or problems. In other words, our brains make educated guesses based on limited knowledge. This is known as “confabulation” and is an example of a human bias.

Our biases can result in poor judgment. For example, the automation bias is our tendency to favor information generated by automated systems, such as ChatGPT, over information from non-automated sources. This bias can cause us to overlook errors and act on false information.

Other relevant heuristics include the halo effect, where our initial impression of something influences our subsequent interactions with it, and the fluency bias, which describes our preference for information presented in an easy-to-read manner.

In summary, human thinking is often influenced by cognitive biases and distortions, and these tendencies largely occur without our awareness.

How AI Hallucinates

In the context of LLMs, hallucination is different. An LLM does not attempt to conserve limited mental resources to make sense of the world efficiently. In this context, “hallucination” simply refers to a failed attempt to predict a suitable response to input.

However, there are similarities between how humans and LLMs hallucinate, as LLMs also try to “fill in the gaps”.

LLMs generate a response by predicting which word is most likely to come next in a sequence based on what has come before and the associations the system has learned during training.

Like humans, LLMs strive to predict the most likely response. But unlike humans, they do this without understanding the meaning behind their words. This is why they sometimes produce nonsensical outputs.
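To make this prediction step concrete, here is a minimal, illustrative Python sketch of next-word prediction. The vocabulary, the toy scores, and the sampling setup are all invented for demonstration; in a real LLM the scores come from a trained neural network conditioned on the full context.

```python
import numpy as np

# Toy vocabulary and scores, invented purely for illustration.
vocab = ["cat", "sat", "on", "mat", "purred", "."]

def softmax(logits):
    """Turn raw scores into a probability distribution over the vocabulary."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def predict_next_token(context, rng):
    """Score every vocabulary item given the context, then sample the next token.
    A real LLM would compute these scores (logits) with a trained network;
    here they are fixed toy numbers."""
    toy_logits = np.array([2.0, 1.0, 0.3, 0.8, 1.2, 0.1])
    probs = softmax(toy_logits)
    # The model "fills in the gap" by sampling from this distribution.
    return rng.choice(vocab, p=probs)

rng = np.random.default_rng(0)
print(predict_next_token(["the"], rng))  # most often "cat", since it scores highest
```

Because the model only ever picks a plausible continuation, a fluent but inaccurate answer and a fluent, correct one are produced by exactly the same mechanism.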

There are various factors that contribute to LLM hallucination. One major factor is training on flawed or insufficient data. Other factors include the learning mechanisms programmed into the system and how these mechanisms are reinforced through additional training using human input.

Improving Together

So, if both humans and LLMs are susceptible to hallucination, which is easier to fix?

Fixing the training data and processes underlying LLMs may seem easier than fixing ourselves. However, this perspective fails to consider the human factors that influence AI systems and reflects yet another human bias known as the fundamental attribution error.

The reality is that our shortcomings and the shortcomings of our technologies are interconnected. Fixing one will contribute to fixing the other. Here are some ways we can achieve this:

Responsible data management: Biases in AI often stem from biased or limited training data. Addressing this involves ensuring that training data are diverse and representative, building bias-aware algorithms, and using techniques such as data balancing to reduce skewed or discriminatory patterns (a brief sketch of one such technique appears below).

Transparency and explainable AI: Despite taking the aforementioned actions, biases can persist in AI systems and be challenging to detect. By investigating how biases can infiltrate and propagate within a system, we can better explain the presence of bias in the outputs. This forms the basis of “explainable AI,” which aims to make AI systems’ decision-making processes more transparent.

Putting the public’s interests first: Recognizing, managing, and learning from biases in AI requires human accountability and integrating human values into AI systems. This entails ensuring that stakeholders represent individuals from diverse backgrounds, cultures, and perspectives.
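As an illustration of the data balancing mentioned above, the sketch below shows random oversampling, one simple balancing technique. The toy dataset and labels are invented; real pipelines would work with far larger data and may prefer other strategies.

```python
import random
from collections import Counter

# Invented toy dataset: 90 examples of one class, 10 of another.
dataset = [("text about group A", "A")] * 90 + [("text about group B", "B")] * 10

def oversample(examples, seed=0):
    """Duplicate randomly chosen examples from under-represented classes
    until every class appears as often as the largest one."""
    rng = random.Random(seed)
    by_label = {}
    for example in examples:
        by_label.setdefault(example[1], []).append(example)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

print(Counter(label for _, label in oversample(dataset)))  # e.g. Counter({'A': 90, 'B': 90})
```

Oversampling is only one option; undersampling the majority class or reweighting examples during training serve the same goal of keeping the model from learning skewed patterns.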

By working together in this manner, it is possible to develop smarter AI systems that help mitigate our tendencies to hallucinate.

For example, AI is being used in healthcare to analyze human decisions. Machine learning systems can detect inconsistencies in human decision-making and provide prompts that alert clinicians to them, improving diagnostic decisions while maintaining human accountability.

In the realm of social media, AI is used to train human moderators in identifying abuse, as seen in projects like Troll Patrol, which focuses on combating online violence against women.

In another instance, the combination of AI and satellite imagery enables researchers to analyze differences in nighttime lighting across regions as a proxy for relative poverty. This information aids in understanding the economic status of different areas.

Importantly, while we strive to enhance the accuracy of LLMs, we should not overlook how their current limitations reflect the shortcomings of our own cognitive processes.

Editor Notes

As we continually explore and develop AI technologies, it is crucial to acknowledge the limitations and biases that exist within ourselves and our creations. By understanding our own tendencies for bias and hallucination, we can align the development of AI systems with our values, ensuring they contribute positively to society.

For the latest news and insights on AI, visit GPT News Room.

