Friday 28 July 2023

Researchers discover vulnerabilities in chatbots like OpenAI’s ChatGPT that make it difficult to guard the models against deception

**AI Security Vulnerability: Language Model Guardrails Bypassed**

Welcome to July’s special edition of Eye on A.I. In a groundbreaking discovery, researchers from Carnegie Mellon University and the Center for A.I. Safety have found a way to bypass the guardrails imposed on large language models (LLMs) to prevent them from engaging in harmful behavior. This poses a significant threat to the deployment of LLMs in public applications, as attackers can manipulate the models into producing racist or sexist dialogue, writing malware, and performing other malicious actions.

The researchers’ attack method proved effective against popular chatbots like OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing Chat, and Anthropic’s Claude 2. It is particularly alarming for open-source LLMs, including Meta’s LLaMA models, because the attack works best when the attacker has access to the entire A.I. model, including its weights. With that access, the researchers built a program that automatically searches for prompt suffixes capable of overriding the guardrails.
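
To make the idea concrete, here is a heavily simplified, hedged sketch of such a suffix search, assuming a HuggingFace-style open-weight chat model (the model name, function names, and hyperparameters below are illustrative, not from the paper). The researchers’ published attack uses token-level gradient information; this sketch substitutes a plain greedy random search that optimizes the same objective, namely raising the probability that the model begins its answer with a compliant prefix.

```python
# Simplified, illustrative suffix search against an open-weight model.
# Assumptions: model/tokenizer names and the random-search strategy are
# placeholders; a real attack would also apply the model's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "lmsys/vicuna-7b-v1.5"  # assumed example of an open-weight chat model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def target_logprob(prompt: str, suffix_ids: torch.Tensor, target: str) -> float:
    """Log-probability that the model continues `prompt + suffix` with `target`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
    target_ids = tok(target, add_special_tokens=False,
                     return_tensors="pt").input_ids.to(model.device)
    ids = torch.cat([prompt_ids, suffix_ids.unsqueeze(0), target_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, so score the target span shifted by one.
    tgt_start = ids.shape[1] - target_ids.shape[1]
    logprobs = torch.log_softmax(logits[0, tgt_start - 1:-1], dim=-1)
    return logprobs.gather(1, target_ids[0].unsqueeze(1)).sum().item()

def find_suffix(prompt: str, target: str, suffix_len: int = 20, steps: int = 500):
    """Greedy random search: mutate one suffix token at a time, keep improvements."""
    vocab = model.config.vocab_size
    suffix = torch.randint(0, vocab, (suffix_len,), device=model.device)
    best = target_logprob(prompt, suffix, target)
    for _ in range(steps):
        cand = suffix.clone()
        cand[torch.randint(0, suffix_len, (1,)).item()] = torch.randint(0, vocab, (1,)).item()
        score = target_logprob(prompt, cand, target)
        if score > best:
            suffix, best = cand, score
    return tok.decode(suffix), best
```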

To human eyes, these suffixes look like strings of random characters and nonsense words. Yet, because of the statistical associations LLMs learn, the strings deceive the model into producing the desired response. While appending well-known phrases such as “Sure, here’s…” can sometimes nudge chatbots into complying with requests they would otherwise refuse, the automatically discovered strings had a far higher success rate.
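
As a usage sketch of that mechanism (reusing `tok`, `model`, and `find_suffix` from the code above, with placeholder strings rather than any real attack prompt), success is typically judged by whether the reply opens with the compliant prefix instead of a refusal:

```python
# Placeholder prompt and target; reuses tok, model, find_suffix from the sketch above.
prompt = "A request the chatbot would normally refuse."
target = "Sure, here is"

suffix, score = find_suffix(prompt, target)
inputs = tok(prompt + " " + suffix, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
reply = tok.decode(out[0, inputs.input_ids.shape[1]:], skip_special_tokens=True)

# Crude success check: did the model refuse, or did it start to comply?
refusals = ("I'm sorry", "I cannot", "I can't", "As an AI")
print("guardrail bypassed:", not reply.strip().startswith(refusals))
```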

The researchers achieved a success rate of nearly 100% against Vicuna, an open-source chatbot built on Meta’s original LLaMA. Even against Meta’s newer LLaMA 2 models, which were designed with stronger guardrails, the attack achieved a 56% success rate on individual harmful behaviors and an 84% success rate when multiple attacks were combined. Similar results were observed across various other open-source A.I. chatbots.

Surprisingly, the same attack suffixes also worked against proprietary models, where access is limited to a public-facing prompt interface. Although the researchers couldn’t tune the attack specifically for these models because their weights are unavailable, they speculate that similarities between the open-source models and GPT-3.5, the model behind ChatGPT, could account for the transfer. It also raises questions about whether Bard’s training data drew on ChatGPT output, something Google has denied.
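
Checking whether a suffix found on an open-weight model transfers to a hosted chatbot only requires the public prompt interface. Below is a minimal sketch of such a transfer test, assuming the OpenAI Python client (v1.x) with an `OPENAI_API_KEY` in the environment; the model name is one the article mentions by family, and the prompt and suffix strings are placeholders.

```python
# Hedged sketch of a black-box transfer test against a hosted chat endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transfers(prompt: str, suffix: str, model: str = "gpt-3.5-turbo") -> bool:
    """Return True if the hosted model answers instead of refusing."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt + " " + suffix}],
        max_tokens=64,
        temperature=0,
    )
    reply = resp.choices[0].message.content or ""
    refusals = ("I'm sorry", "I cannot", "I can't", "As an AI")
    return not reply.strip().startswith(refusals)

# Placeholder strings; a real evaluation would reuse a suffix found on an open model.
print(transfers("A request the chatbot would normally refuse.", "<adversarial suffix>"))
```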

Zico Kolter, a professor at Carnegie Mellon, noted that the success against proprietary models may come down to the nature of language itself and the statistical way deep learning systems process it: there appear to be obscure regularities in language data through which particular strings of characters and tokens take on meaning for the models. Interestingly, Anthropic’s Claude 2 model, which is trained with constitutional A.I., showed greater resilience against attacks derived from open-source models.

However, the researchers emphasize that this vulnerability should not discourage the open-sourcing of powerful A.I. models. Making models accessible to a larger community lets researchers work together on better defenses against such attacks. Keeping models proprietary would restrict hands-on study to those with deep financial resources, while nation states and well-funded rogue actors would still be capable of finding exploits; fostering collaboration among academic researchers is crucial to preventing them.

In conclusion, the research from Carnegie Mellon and the Center for A.I. Safety highlights a significant security vulnerability in language models. It also underscores the importance of keeping models open source so that the broader community can work on making them more robust. With collaborative effort, it is possible to build better defenses and countermeasures against malicious attacks on A.I. systems.

**Editor Notes**

Opinion Piece: Embracing Collaboration and Safeguarding A.I. Systems

The research conducted by Carnegie Mellon University and the Center for A.I. Safety sheds light on an essential aspect of A.I. development and security. The discovery of vulnerabilities in guardrails of language models serves as a wake-up call for the industry to address potential threats that can arise from powerful A.I. systems.

While it is concerning to learn that attackers can manipulate A.I. models for malicious purposes, it is crucial to maintain an open and collaborative approach to address these issues effectively. Open-source models have proven to be indispensable for advancing research and identifying vulnerabilities that might otherwise go unnoticed.

As tempting as it may be to restrict access to A.I. models, doing so would only limit progress and leave the development of secure systems in the hands of a select few. By allowing a broader community of researchers to experiment and contribute, the collective effort can lead to stronger defenses against potential attacks.

Collaboration has always been a driving force behind technological advancements. It is through open dialogue, knowledge sharing, and collective problem-solving that we can create a safer A.I. landscape. The recent research exemplifies the importance of maintaining this ethos, even in the face of security challenges.

As we continue to explore the capabilities of artificial intelligence, it is imperative that we keep security at the forefront. By fostering collaboration and embracing open-source models, we can work towards building robust A.I. systems that benefit humanity while remaining resilient against potential threats.

For more news and insights on the latest developments in A.I., visit [GPT News Room](https://gptnewsroom.com).

