Tuesday, 24 October 2023

IBM Research: ChatGPT Generates Convincing Phishing Emails

**The Dark Side of AI: ChatGPT’s Potential for Phishing Attacks**

OpenAI’s ChatGPT has caused a stir in the security industry with its ability to generate phishing emails that are almost as convincing as those written by humans. According to research conducted by IBM’s X-Force security team, ChatGPT can create these emails in a fraction of the time a human needs, raising fresh concerns among enterprises.

In an A/B experiment with a healthcare company, researchers found that employees were slightly more likely to click on the human-written phishing email and slightly more likely to report the AI-generated message as suspicious. However, it took the security researchers only five minutes and five prompts to create a highly convincing phishing email with ChatGPT, whereas crafting a comparable email by hand typically takes the X-Force team about 16 hours.

In a report about the experiment, Stephanie Carruthers, global head of innovation and delivery for X-Force, highlighted the need for corporations to adapt their security practices, emphasizing that AI is constantly improving and becoming more sophisticated. Even restricted versions of generative AI models can be coaxed into writing phishing emails with simple prompts, while unrestricted versions offer attackers more efficient ways to scale sophisticated phishing campaigns in the future.

ChatGPT’s Rise and Security Concerns

OpenAI’s ChatGPT made waves in November 2022, becoming the fastest-growing web app at the time. Other IT vendors, such as Google with its Bard tool, accelerated development of their own generative AI offerings in response.

Enterprises quickly adopted large language models (LLMs) like ChatGPT to improve business efficiency. However, security concerns arose, especially over developers inadvertently leaking sensitive information while using ChatGPT in their coding work. Cybersecurity vendors also observed threat groups experimenting with ChatGPT and building their own generative AI chatbots, heightening concerns across organizations.

Addressing the Concerns: Creating AI-Based Phishing Emails

The X-Force researchers used a systematic process of experimentation and refinement to get ChatGPT to generate phishing emails tailored to specific industry sectors. They prompted the chatbot to prioritize top employee concerns within each industry and to strategically apply social engineering and marketing techniques in the emails, and they let ChatGPT choose the sender, either someone from the company or an outside vendor.

For healthcare, a prime target for such attacks, the LLM crafted a phishing email, ostensibly from an internal HR manager, pitching career advancement opportunities. Carruthers, drawing on her own social engineering experience, found the AI-generated phishing emails fairly persuasive. Two of the three participating organizations backed out after reviewing the phishing emails, worried the messages would be too successful.

The research revealed that 14% of employees clicked on the human-created phishing email, while 11% clicked on the AI-generated message. Emotional intelligence, personalization, and a succinct subject line were identified as the key factors behind the human email’s edge: humans excel at reading emotions and weaving narratives that sound realistic and tug at heartstrings. The human-written phishing email also included the recipient’s name and a reference to a legitimate organization, while the AI-generated one carried a significantly longer subject line.

Warnings and Recommendations

Carruthers cautioned that organizations should not be lulled into a false sense of security by these statistics. Though generative AI is not yet widespread in active campaigns, AI-based tools like WormGPT are readily available on the dark web, indicating that attackers are testing their efficacy for phishing attacks. Carruthers stressed the need for organizations to strengthen their defenses against phishing campaigns: verifying contacts when in doubt about a message’s legitimacy, running training programs that address emerging forms of vishing, and reinforcing identity and access management controls. It is essential to continuously adapt and innovate security practices to keep pace with the evolving tactics of threat groups.
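As a concrete illustration of the contact-verification advice above, one automatable check is inspecting a message’s Authentication-Results header for SPF, DKIM, or DMARC failures before trusting its contents. The minimal Python sketch below uses only the standard library; the header format follows RFC 8601, but real mail providers vary, and the hostnames and addresses are invented for the example.

```python
# Minimal sketch: flag SPF/DKIM/DMARC checks that did not record a pass,
# based on the Authentication-Results header the receiving mail server
# adds (RFC 8601). Illustrative only: real headers vary by provider, and
# the sample message below is invented.
import email
from email import policy

def auth_failures(raw_message: bytes) -> list[str]:
    """Return the authentication checks that did not record a 'pass'."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    failures = []
    for header in msg.get_all("Authentication-Results", []):
        for check in ("spf", "dkim", "dmarc"):
            # Results appear as tokens like "spf=pass" or "dmarc=fail".
            if check in header and f"{check}=pass" not in header:
                failures.append(check)
    return failures

# Example: a message whose receiving server recorded a DMARC failure.
raw = (b"Authentication-Results: mx.example.com; spf=pass; "
       b"dkim=pass; dmarc=fail\r\n"
       b"From: hr@example.com\r\n"
       b"Subject: Career advancement opportunities\r\n\r\nHello.")
print(auth_failures(raw))  # ['dmarc']
```

A failed DMARC result does not prove a message is phishing, but it is a cheap signal worth surfacing to employees alongside a one-click reporting button.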

Organizations must also dispel the misconception that phishing emails are easy to identify because they are riddled with bad grammar and spelling errors. With generative AI chatbots, the language in phishing emails can be grammatically flawless. Carruthers instead suggested training employees to watch the length and complexity of email content, as longer emails, often a hallmark of AI-generated text, can serve as a warning sign.
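To make that heuristic concrete, the short sketch below flags messages with unusually long subject lines or bodies. The word-count thresholds are arbitrary placeholders, not figures from the X-Force study; a real deployment would tune them against an organization’s own mail corpus and combine them with other signals.

```python
# Hedged sketch of the length heuristic described above: flag unusually
# long subjects or bodies. Thresholds are placeholders, not values from
# the X-Force study.
def length_flags(subject: str, body: str,
                 max_subject_words: int = 8,
                 max_body_words: int = 250) -> list[str]:
    flags = []
    if len(subject.split()) > max_subject_words:
        flags.append("unusually long subject line")
    if len(body.split()) > max_body_words:
        flags.append("unusually long body")
    return flags

print(length_flags(
    "Exciting New Opportunities for Career Growth and Development Await You",
    "lorem " * 300))
# ['unusually long subject line', 'unusually long body']
```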

**Editor Notes: Protecting Against AI-Powered Phishing Attacks**

The research conducted by IBM’s X-Force security team highlights the danger that AI-powered chatbots like ChatGPT pose when it comes to phishing attacks. While humans still have the upper hand in emotional manipulation and crafting persuasive emails, the emergence of AI in phishing attacks signals a pivotal moment in social engineering tactics.

Organizations need to stay ahead of the curve by constantly updating and reinforcing their security strategies. It is crucial to educate employees about the evolving nature of phishing attacks, provide training programs that address emerging threats like vishing, and strengthen identity and access management controls. Additionally, organizations should implement contact verification protocols to confirm the legitimacy of suspicious messages.

At GPT News Room, we believe in staying informed about the latest developments in AI and how they impact various aspects of our lives. Visit our website for more insightful articles and thought-provoking content on AI and its implications.
