Tuesday 4 July 2023

Microsoft Researchers Present a Groundbreaking Approach to LLM Calibration Using Pareto Optimal Self-Supervision, Eliminating the Need for Labeled Training Data

GPT Models and the Challenge of Hallucination in Language Generation

Recent advancements in large language models (LLMs), such as GPT-4, PaLM, and LLaMA, have shown incredible capabilities in problem-solving and natural language understanding. However, when these models are used in critical industries like healthcare and biology, the issue of hallucination poses a significant challenge.

Hallucination refers to instances where an LLM generates content that is not grounded in accurate information, often while presenting it with unwarranted confidence. Unfortunately, there are no foolproof techniques for detecting hallucinations or for accurately measuring how confident an LLM's responses really are. This lack of reliable confidence estimation is particularly problematic in applications that require high accuracy and dependability.

Evaluating Confidence in LLM Replies

There are two main categories of methods for assessing the degree of confidence in LLM replies. The first category probes the LLM in various ways to generate multiple responses, which are then used to infer the dependability of the answer. Examples of such techniques include self-consistency and chain-of-thought prompting. However, these methods are often subjective and susceptible to biases of the LLM itself.
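As an illustration of the first category, here is a minimal self-consistency sketch: the model is sampled several times on the same prompt, and the level of agreement among the answers is treated as a rough confidence signal. The query_llm callable and its parameters are hypothetical stand-ins for whatever LLM API is being used.

```python
from collections import Counter

def self_consistency_confidence(prompt, query_llm, n_samples=5, temperature=0.7):
    """Sample the LLM several times and treat answer agreement as a confidence proxy.

    `query_llm(prompt, temperature=...)` is a hypothetical callable that returns
    a single answer string from the underlying model.
    """
    answers = [query_llm(prompt, temperature=temperature) for _ in range(n_samples)]
    majority_answer, majority_count = Counter(answers).most_common(1)[0]
    confidence = majority_count / n_samples  # fraction of samples agreeing with the majority
    return majority_answer, confidence
```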

The second category relies on external sources of data to evaluate confidence. This can be done by hiring human reviewers to verify answers or by using large amounts of labeled data to train assessment models. However, these approaches require extensive manual annotation and can be costly.

The Approach: Pareto Optimal Learning

Researchers from Microsoft propose a flexible framework that combines information from both the LLM response and external supervision sources using Pareto optimal learning. This approach addresses the issue of biases induced by the LLM and improves calibration power.

Pareto optimal self-supervision offers a useful framework for integrating both the LLM response and external supervision sources. The researchers propose the Pareto optimal learning assessed risk (POLAR) score to estimate the probability of LLM errors. Experimental findings on four NLP tasks demonstrate the effectiveness of the POLAR score in assessing LLM error rates and improving performance.
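The exact harmonizer model and Pareto objective are described in the paper; as a rough sketch of the idea, one can imagine a simple model fitted on unlabeled examples so that it agrees as well as possible with both the LLM's own answers and several weak supervision sources, with the fitted model's disagreement with the LLM then serving as a POLAR-style error-risk estimate. Every name below (fit_harmonizer, polar_style_risk, the linear model, the weighted-sum scalarization) is illustrative, not the paper's exact formulation.

```python
import numpy as np

def fit_harmonizer(features, llm_labels, weak_label_sets, weights, lr=0.1, epochs=200):
    """Fit a logistic-regression 'harmonizer' on unlabeled examples.

    The weighted sum of per-source cross-entropy losses is a crude scalarization
    of the multi-source (Pareto-style) trade-off between agreeing with the LLM's
    own answers and agreeing with each weak supervision source.
    `weights` must have one entry for the LLM plus one per weak source.
    """
    n, d = features.shape
    w = np.zeros(d)
    sources = [llm_labels] + list(weak_label_sets)  # each: 0/1 array of length n
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w)))   # harmonizer's predicted label probability
        grad = np.zeros(d)
        for weight, labels in zip(weights, sources):
            grad += weight * (features.T @ (p - labels)) / n  # cross-entropy gradient per source
        w -= lr * grad
    return w

def polar_style_risk(features, llm_labels, w):
    """Estimated probability that the LLM response is wrong: the harmonizer's
    disagreement with the LLM's own label (a rough stand-in for the POLAR score)."""
    p = 1.0 / (1.0 + np.exp(-(features @ w)))
    return np.where(llm_labels == 1, 1.0 - p, p)
```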

Advantages of Pareto Optimal Self-Supervision

Pareto optimal self-supervision has several advantages over traditional supervised model training. First, it requires only unlabeled data, making it suitable for fields where annotation is expensive. Second, it leverages patterns in the data and external expertise to adaptively improve LLM performance. Third, it can help correct LLM mistakes without the need for human-labeled training data.
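One way such adaptive correction could look in practice, sketched under the assumption of a hypothetical query_llm callable and a POLAR-style risk_fn, is to re-prompt the model only when the estimated error risk crosses a threshold; the threshold value here is purely illustrative.

```python
def answer_with_fallback(prompt, query_llm, risk_fn, threshold=0.5):
    """Answer a prompt; if the estimated error risk is high, re-query with extra guidance.

    `query_llm` returns an answer string and `risk_fn` returns an error-risk
    estimate in [0, 1] (e.g. a POLAR-style score); both are hypothetical stand-ins.
    """
    answer = query_llm(prompt)
    if risk_fn(prompt, answer) > threshold:
        revised_prompt = (
            prompt
            + "\n\nThe previous answer may be unreliable. "
              "Please reason step by step and double-check before answering."
        )
        answer = query_llm(revised_prompt)
    return answer
```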

Conclusion

The use of large language models like GPT-4 in various sectors has shown great promise in solving complex problems. However, the issue of hallucination remains a challenge, especially in critical industries like healthcare and biology. The proposed framework of Pareto optimum self-supervision provides a viable solution to improve LLM calibration and mitigate the risks associated with hallucination.

Editor Notes

GPT News Room aims to deliver the latest updates and insights into the world of artificial intelligence and language models. Stay informed with our GPT News Room platform, where you can access news, research articles, and cool AI projects. Don’t miss out on the incredible advancements happening in the AI field!

