Sunday 4 June 2023

Economist Writes Daily on New Study Revealing ChatGPT’s Fabrication of Nonexistent Citations

GPT-3.5 Chatbot False Citations in Economics Literature: Evidence of Systematic Error

In a recent working paper, Buchanan and Shapoval provide evidence that the GPT-3.5 chatbot fabricates citations when writing about the economics literature. The study constructs prompts for every Journal of Economic Literature (JEL) topic to test the chatbot’s ability to write about economic concepts. More than 30% of the citations suggested by ChatGPT do not exist, and accuracy declines as the question becomes more specific, underscoring the importance of fact-checking the output.

The paper’s appendix includes over 30 pages of GPT responses to the prompts, useful for economists who want a sense of what GPT “knows” about various fields of economics. The study found that the proportion of real citations falls as the prompt becomes more specific, a pattern that had not previously been documented quantitatively.

To construct the prompts, the study used three levels of specificity. The first prompt asked for a summary of work in JEL category A, in fewer than 10 sentences, including citations from published papers. The second asked about a topic within JEL category Q: a summary of work related to technological change in developing countries in economics, again including citations from published papers. The third asked the chatbot to explain the change in the car industry with the rising supply of electric vehicles, with citations from published papers given as a list, with author, year in parentheses, and journal.
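The three-tier design above can be sketched in code. This is a hypothetical illustration, not the paper’s actual materials: the template wording, the `build_prompts` helper, and the `(Author, Year)` extraction regex are all assumptions made for demonstration; the study’s exact prompts and verification procedure may differ.

```python
import re

# Hypothetical templates mirroring the three specificity levels described
# above (broad JEL category, topic within a category, narrow question).
PROMPT_TEMPLATES = [
    "Summarize work in JEL category {jel} in less than 10 sentences, "
    "including citations from published papers.",
    "Summarize work related to {topic} in economics, "
    "including citations from published papers.",
    "Explain {question}. Include citations from published papers as a "
    "list, with author, year in parentheses, and journal.",
]

def build_prompts(jel, topic, question):
    """Fill each specificity level with concrete content."""
    return [
        PROMPT_TEMPLATES[0].format(jel=jel),
        PROMPT_TEMPLATES[1].format(topic=topic),
        PROMPT_TEMPLATES[2].format(question=question),
    ]

# Pull "(Author, 2020)"-style citations out of a model response so each
# one can be checked against a bibliographic database by hand.
CITATION_RE = re.compile(
    r"\(([A-Z][A-Za-z\-]+(?:\s+(?:and|&)\s+[A-Z][A-Za-z\-]+)*),?\s+(\d{4})\)"
)

def extract_citations(text):
    """Return (author_string, year) pairs found in the response text."""
    return [(m.group(1), int(m.group(2))) for m in CITATION_RE.finditer(text)]
```

For example, `extract_citations("As shown by (Smith, 2019) and (Lee and Park, 2020).")` yields the two author–year pairs, each of which would then need to be verified manually, since the study’s point is precisely that such citations often do not exist.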

The study matters because, although GPT has become useful in research production, fact-checking its output remains essential. The paper provides systematic evidence of false citations when ChatGPT writes about academic literature, a problem that may be mitigated in future model versions.

Editor Notes:

This study is a useful reminder of the importance of fact-checking AI-generated content. GPT-3.5 is just one example of how AI technologies are disrupting traditional research processes, and as these tools continue to advance, maintaining the integrity of academic literature will remain crucial. GPT News Room is at the forefront of AI news, providing insights into the latest developments in the field. Check out the GPT News Room website for more information.




from GPT News Room https://ift.tt/3cAfWs5

