Sunday, 13 August 2023

Expect no quick solutions in the evaluation of AI models, as security was neglected

**The Hidden Dangers of AI Chatbots: Unveiling the Vulnerabilities**

*White House officials, Silicon Valley powerhouses, and cybersecurity experts gather at DefCon hacker convention to red-team AI chatbots for security flaws.*

In a groundbreaking three-day competition held at the DefCon hacker convention in Las Vegas, White House officials and cybersecurity experts are coming together to address the potential societal harm caused by AI chatbots. With over 3,500 participants and eight leading large-language models under scrutiny, this independent “red-teaming” exercise aims to expose flaws in these transformative technologies. However, the findings will not be made public until approximately February, and fixing these flaws is expected to take significant time and financial investment.

The current state of AI models reveals a disconcerting reality. Studies conducted by academic and corporate researchers have shown that these models are unwieldy, brittle, and malleable. Their training focused primarily on gathering vast amounts of data, leading to racial and cultural biases as well as susceptibility to manipulation. Security was an afterthought during their development, resulting in potentially harmful consequences for society.

Despite efforts to address security concerns post-development, the generative AI industry has repeatedly faced security breaches and vulnerabilities. Researchers have exposed these vulnerabilities by tricking AI systems into mislabeling malware and generating harmful content. Furthermore, leading chatbots have been found to be vulnerable to automated attacks, raising concerns about the very nature of deep learning models and their susceptibility to threats.

The U.S. National Security Commission on Artificial Intelligence’s 2021 report highlights the importance of protecting AI systems from attacks. However, due to inadequate investment in research and development, attacks on commercial AI systems have become more prevalent. The absence of regulations further exacerbates the problem, allowing organizations to conceal security breaches.

Chatbots, in particular, are highly susceptible to attacks due to their direct interaction with users through plain language. These interactions can alter the chatbots in unexpected ways, making them vulnerable to manipulation. Researchers have demonstrated that corrupting a small portion of the data used to train AI systems can have significant adverse effects. This “poisoning” technique, costing as little as $60, can disrupt the functioning of AI models, compromising their reliability.

The state of AI security for text- and image-based models is concerning. Organizations lack response plans for data-poisoning attacks and dataset theft, with many remaining unaware of such breaches. Although major AI players have made commitments to prioritize security and safety, there are doubts about their actual implementation. Experts predict that search engines and social media platforms will be exploited to spread disinformation and manipulate AI systems for financial gain. Privacy concerns arise as AI bots interact with sensitive systems, potentially compromising personal and confidential data.

Furthermore, there is a risk of AI systems retraining themselves from junk data, leading to the pollution of their own algorithms. Company secrets are also at risk of being ingested and leaked by AI systems, as evident from incidents reported at Samsung. These concerns have prompted companies like Verizon and JPMorgan to restrict the use of AI language models like ChatGPT in their organizations.

While major AI players have dedicated security teams, smaller competitors may lack adequate security measures, potentially multiplying vulnerabilities. With the rise of startups leveraging licensed pre-trained models, it is crucial to address these security concerns early in their development stages.

In conclusion, the DefCon competition represents a critical step towards identifying and mitigating the vulnerabilities of AI chatbots. However, it is evident that security and safety continue to be significant challenges. Addressing these challenges necessitates strong collaboration between industry, government, and the cybersecurity community. Only through proactive measures and sustained investment in research and development can we ensure the safe and responsible integration of AI chatbots into society.

*Editor’s Notes: Building secure and reliable AI chatbots is a vital concern for the future of technology. The DefCon competition serves as a significant milestone in raising awareness about the vulnerabilities of these powerful tools. As we continue to push the boundaries of AI, it is crucial to prioritize security and invest in robust safeguards. To learn more about the latest advancements in AI and cybersecurity, visit GPT News Room.* (Link: [GPT News Room](https://ift.tt/vAGxJYE)

Source link



from GPT News Room https://ift.tt/OdmfxBS

No comments:

Post a Comment

語言AI模型自稱為中國國籍,中研院成立風險研究小組對其進行審查【熱門話題】-20231012

Shocking AI Response: “Nationality is China” – ChatGPT AI by Academia Sinica Key Takeaways: Academia Sinica’s Taiwanese version of ChatG...