Thursday 29 June 2023

Study finds AI-written tweets from tools like ChatGPT are more credible than human-written text

The Rise of AI Text Generators: Can We Spot Misinformation?

AI text generators such as ChatGPT, the Bing AI chatbot, and Google Bard have attracted significant attention recently. These powerful language models can produce impressive writing that appears entirely legitimate. However, a new study suggests that humans can be easily fooled by misinformation generated by these AI systems.

To investigate this phenomenon, researchers from the University of Zurich ran an experiment to determine whether people could distinguish human-written content from text generated by GPT-3, the model released in 2020 (a less capable predecessor of GPT-4, introduced earlier this year). The results were surprising: participants performed only marginally better than random guessing, with an accuracy of 52 percent. Telling whether a text was authored by a human or an AI proved genuinely difficult.
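To get a feel for how close 52 percent is to pure chance, consider a quick binomial check. The article does not report the number of judgments, so the sample size below is a purely hypothetical value chosen for illustration:

```python
# How far is 52% accuracy from the 50% expected by coin-flipping?
# The article does not report the sample size, so n is hypothetical.
from scipy.stats import binomtest

n = 1000                      # hypothetical number of human-vs-AI judgments
k = round(0.52 * n)           # judgments classified correctly
result = binomtest(k, n, p=0.5, alternative="greater")
print(f"{k}/{n} correct, one-sided p-value vs. chance: {result.pvalue:.3f}")
```

The point is not the exact p-value, which depends entirely on the unreported sample size, but that an accuracy this close to 50 percent leaves little room for confident discrimination.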

So, what sets GPT-3 apart? In reality, it does not truly comprehend language like we do. Instead, it relies on patterns it has learned from analyzing how humans use language. While GPT-3 is beneficial for tasks such as translation, chatbots, and creative writing, there are risks associated with its misuse, including the spread of misinformation, spam, and fake content.
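To make the "patterns, not comprehension" point concrete, here is a minimal sketch of statistical text generation. It uses GPT-2, a small openly available model, as a stand-in for GPT-3 (which is only reachable through OpenAI's API); the sampling principle is the same:

```python
# Minimal sketch of pattern-based text generation, assuming the
# Hugging Face `transformers` package is installed. The model simply
# samples the next token from patterns learned during training,
# with no underlying comprehension of the text it produces.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Researchers studying social media found that",
    max_new_tokens=30,     # length of the continuation
    do_sample=True,        # sample instead of taking the single likeliest token
    temperature=0.8,       # below 1.0 makes sampling slightly more conservative
)
print(out[0]["generated_text"])
```

The fluency here comes entirely from token statistics, which is why the output can read as credible regardless of whether it happens to be true.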

According to the researchers, the rise of AI text generators coincides with another issue we currently face: the “infodemic.” This refers to the rapid spread of fake news and disinformation. The study raises concerns about the potential use of GPT-3 to generate misleading information, particularly in critical areas like global health.

The Impact of GPT-3-Generated Content

To assess the influence of GPT-3-generated content on people’s understanding, the researchers conducted a survey. They compared the credibility of synthetic tweets created by GPT-3 with those written by humans, focusing on topics known to be plagued by misinformation, such as vaccines, 5G technology, Covid-19, and evolution.

The results were surprising yet again. Participants recognized accurate information more often when it appeared in GPT-3's synthetic tweets than when it appeared in human-written ones. Likewise, they judged GPT-3's disinformation tweets to be accurate more often than human-written disinformation. In other words, GPT-3 proved more effective than human authors both at informing people and at misleading them.

Moreover, participants took less time to evaluate the synthetic tweets compared to the human-written ones. AI-generated content appears to be easier to process and evaluate. However, it is important to note that humans still outperformed GPT-3 when it came to determining the accuracy of information.
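The study's actual analysis is more involved, but the kind of comparison being described can be sketched with a simple tabulation. Everything below is made-up toy data with hypothetical field names, shown only to clarify what is being measured:

```python
# Illustrative tabulation: for each (author, veracity) condition, what
# fraction of tweets did participants judge accurate, and how quickly?
# All records are toy data; field names are hypothetical.
from statistics import mean

ratings = [
    # author,  veracity, judged_accurate, seconds_to_evaluate
    ("human", "true",  True,  9.1),
    ("human", "false", False, 8.4),
    ("gpt3",  "true",  True,  6.7),
    ("gpt3",  "false", True,  6.9),  # disinformation mistaken for fact
]

for author in ("human", "gpt3"):
    for veracity in ("true", "false"):
        group = [r for r in ratings if r[0] == author and r[1] == veracity]
        judged = mean(1.0 if r[2] else 0.0 for r in group)
        secs = mean(r[3] for r in group)
        print(f"{author:5} {veracity:5} judged-accurate={judged:.0%} "
              f"mean-eval-time={secs:.1f}s")
```

In the study's terms, GPT-3 "informing better" corresponds to a higher judged-accurate rate for true synthetic tweets, and "misleading better" to a higher rate for false ones, with shorter evaluation times for the synthetic tweets in both cases.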

Furthermore, the study revealed that GPT-3 generally complied when asked to produce accurate information, and in some cases it even refused requests to generate disinformation. The reverse also happened: it occasionally produced false content when asked for accurate information. In short, the model can decline to spread fake content, but its compliance in either direction is not fully reliable.

This study highlights our vulnerability to misinformation generated by AI text generators like GPT-3. While these systems are capable of producing highly credible texts, it is crucial for us to remain vigilant and develop effective tools to detect and combat misinformation.

Editor Notes

The findings of this study shed light on the potential dangers of AI-generated content, particularly in terms of misinformation. As technology continues to advance, it is essential for both researchers and technology companies to prioritize the development of robust systems that can accurately detect and counteract false information. Additionally, individuals must remain cautious and critical of the information they encounter, especially in areas where misinformation is prevalent, such as health and scientific topics.

For the latest news and insights on artificial intelligence and technology, visit GPT News Room.

