How ChatGPT is Being Used to Generate Polymorphic Malware
ChatGPT has been all the rage since it began gaining popularity at the end of 2022. However, this large language model (LLM) has cybersecurity experts worried. They have demonstrated that one of the key problems with ChatGPT and other LLMs is their ability to generate mutating code that can evade endpoint detection and response (EDR) systems. Attackers unlock this capability through "prompt engineering": carefully crafting inputs that steer the model into producing output its safeguards were meant to block.
Cybersecurity companies like KSOC and GitGuardian have raised concerns about ChatGPT's potential for creating polymorphic, or mutating, malware. Those who are adept at prompt engineering can bypass ChatGPT's content filters with input prompts that yield malicious code. By framing requests as hypotheticals, for example, they can "jailbreak" the model and make it produce content it was never supposed to create.
ChatGPT ships with content filters intended to block harmful output. By sidestepping these filters, however, hackers can make the system generate effective, malicious code.
Jeff Sims, a principal security engineer at HYAS InfoSec, demonstrated the risk by publishing a proof-of-concept whitepaper detailing a working exploit. Sims built BlackMamba, a polymorphic keylogger that queries ChatGPT at runtime to synthesize a fresh payload on every execution, so no two samples share the same code and signature-based detection has nothing stable to match. Sims noted that his Python code evaded an industry-leading EDR product multiple times, though he did not name the vendor.
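To make the polymorphism idea concrete, here is a minimal, deliberately benign sketch: ask an LLM for the same functionality on each run and observe that the returned text, and therefore its hash, differs every time. This is not BlackMamba's code; the prompt, the assumed model name, and the generate_snippet helper are all illustrative assumptions, and the generated payload here is a harmless string utility rather than a keylogger. The sketch assumes the official openai Python package (v1+) with an API key in the OPENAI_API_KEY environment variable.

```python
# Benign illustration of LLM-driven polymorphism: the same request
# produces textually different code on every run, so a static,
# signature-based scanner never sees the same bytes twice.
import hashlib

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Write a Python function named reverse_words that takes a string "
    "and returns it with the word order reversed. Reply with code only."
)

def generate_snippet() -> str:
    """Ask the model for a fresh implementation of the same benign task."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # higher temperature encourages textual variation
    )
    return response.choices[0].message.content

# Each run yields functionally similar code with a different hash.
for i in range(3):
    snippet = generate_snippet()
    digest = hashlib.sha256(snippet.encode()).hexdigest()
    print(f"variant {i}: sha256={digest[:16]}...")
```

BlackMamba reportedly takes this a step further by executing the freshly generated code in memory with Python's exec(), leaving no static artifact on disk for an EDR agent to fingerprint.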
Although steps have been taken to regulate AI, much work remains to make it safe and reliable. Analyst firms like Forrester propose using explainability and observability to provide context and management capabilities. Regardless, this is just one example of how AI can be dangerous in the hands of malicious actors.
Editor Notes:
ChatGPT is not just a powerful tool for generating content; it is also a powerful tool for generating polymorphic malware. Although improvements to ChatGPT's security have been made in recent updates, hackers can still use it to create effective, dynamic malware. As AI systems continue to develop, it is essential to ensure that they do not become a weapon for bad actors.