AI Chatbots Are Spreading Disinformation: Study
Disinformation, propaganda, alternative facts—the use of biased or false information has been a longstanding strategy in politics and social engineering. However, the rise of social media and advancements in AI have amplified the practice, and recent research suggests that AI is even better at spreading disinformation than humans.
A study published in Science Advances reveals that OpenAI’s GPT-3, an AI chatbot, is highly effective at disseminating disinformation. OpenAI, founded in 2015, released GPT-3 in 2020 and granted exclusive licensing to Microsoft. The study surveyed 697 participants to determine if they could identify disinformation tweets generated by GPT-3, as well as distinguish between tweets written by AI and humans.
The Impact of GPT-3’s Disinformation
The report titled “AI model GPT-3 (dis)informs us better than humans” illustrates how GPT-3 was asked to write tweets on various topics, such as vaccines, 5G technology, COVID-19, and the theory of evolution. These subjects were specifically chosen due to their susceptibility to disinformation and public misconceptions. Twitter, with its large user base primarily engaged in news and politics, was chosen as the platform for this study.
- The study selected Twitter because it has approximately 400 million regular users
- An estimated 20–29% of content on Twitter is generated by bots
- This research is applicable to other social media platforms as well
Recognizing AI-Generated Tweets
Participants were scored on their ability to recognize AI-generated tweets, with scores ranging from 0 to 1. The average score was 0.5, equivalent to random guessing, indicating that individuals could not reliably differentiate between human-written and AI-generated tweets. Surprisingly, the accuracy of the information in the tweets did not significantly affect participants’ ability to identify AI-generated content.
The study concludes that advanced AI text generators like GPT-3 could significantly reshape how information spreads. Large language models already produce text that is indistinguishable from organic content, so the emergence of more powerful models, such as GPT-4, and their impact should be closely monitored.
Concerns and Regulatory Measures
The rapid pace of generative AI development, particularly with the release of ChatGPT and GPT-4 in recent months, has sparked concerns within the tech industry. Calls for a temporary pause in AI development have arisen, emphasizing the need for regulation to prevent the misuse of AI and ensure transparency.
Additionally, the spread of AI-generated mis/disinformation and deepfakes has prompted UN Secretary-General António Guterres to advocate for an international agency, similar to the International Atomic Energy Agency (IAEA), to monitor AI’s development. Guterres warns that the proliferation of hate, lies, and misinformation in the digital space poses severe global risks, including threats to democracy, human rights, public health, and climate action.
Editor Notes
As AI continues to advance, it is crucial to address the challenges posed by disinformation and the potential harm it can cause. Reliable monitoring and regulation of AI development are necessary to safeguard individuals and society as a whole. The study’s findings highlight the urgency of this issue and emphasize the need for responsible AI practices.
For more AI-related news and developments, visit GPT News Room.