Thursday 26 October 2023

ChatGPT Develops Code Capable of Exposing Sensitive Information from Databases

A Vulnerability in Open AI’s ChatGPT Exposed by Researchers

Introduction

In a groundbreaking study, researchers discovered a potential vulnerability in Open AI’s ChatGPT and other commercial AI tools. This vulnerability could have been exploited by malicious actors to leak sensitive information, delete critical data, or disrupt database cloud services. The findings have prompted companies like Baidu and OpenAI to make changes to prevent potential misuse of their AI tools. This study is the first of its kind to expose the vulnerability of large language models and their susceptibility to be used as an attack path in online commercial applications.

Manipulating AI Tools

The researchers focused on six AI services that utilize Natural Language Processing to convert human questions into SQL programming language. These “Text-to-SQL” systems, including OpenAI’s ChatGPT, enable users to generate SQL code to interact with databases. The researchers demonstrated how this AI-generated code can be manipulated to include instructions that leak database information, which could lead to future cyberattacks. Additionally, the manipulated code could potentially delete vital data, overwhelm cloud servers with denial-of-service attacks, and compromise authorized user profiles stored in system databases.

OpenAI’s ChatGPT Vulnerability

In their testing conducted in February 2023, the researchers discovered that OpenAI’s ChatGPT could generate harmful SQL code, even if the user’s intent was innocent. For example, a nurse interacting with clinical records could unintentionally be given SQL code that damages the database. The researchers promptly informed OpenAI about their findings. OpenAI has since taken measures to address and rectify the vulnerability, thereby safeguarding users from potential harm.

Baidu-UNIT Vulnerability

The researchers also uncovered similar vulnerabilities in Baidu-UNIT, an intelligent dialogue platform developed by the Chinese tech giant Baidu. Baidu-UNIT automatically converts client requests written in Chinese into SQL queries for Baidu’s cloud service. Upon receiving the researchers’ disclosure report, Baidu acknowledged the weaknesses and patched the system by February 2023.

Text-to-SQL Vulnerabilities

While large language models like ChatGPT are more easily susceptible to manipulated code, systems like Baidu-UNIT, which rely on prewritten rules, can also be vulnerable. According to Xutan Peng, co-lead researcher, the security risks associated with these vulnerabilities have been underrated until now. Despite these risks, Peng still sees the potential benefits of using large language models for database querying purposes.

Conclusion

This pioneering study highlights the importance of addressing vulnerabilities in AI tools and the potential for malicious actors to exploit them. Companies like OpenAI and Baidu have taken steps to enhance the security of their systems, but ongoing vigilance is crucial. As AI continues to evolve, it is vital to prioritize security to ensure the safe and responsible use of these powerful technologies.

Editor Notes

GPT News Room provides up-to-date news and insights related to artificial intelligence, machine learning, and Natural Language Processing. Stay informed about the latest advancements and trends in the world of AI.

Source link



from GPT News Room https://ift.tt/7qgs68F

No comments:

Post a Comment

語言AI模型自稱為中國國籍,中研院成立風險研究小組對其進行審查【熱門話題】-20231012

Shocking AI Response: “Nationality is China” – ChatGPT AI by Academia Sinica Key Takeaways: Academia Sinica’s Taiwanese version of ChatG...