Get Started
Jan 28, 2023

ChatGPT is Used to Create Polymorphic Malware

Hand of businessman interaction with artificial intelligence to invest bitcoin price digital chart exchange. Concept of cryptocurrency future money and nft of blockchain with ai technology. ChatGTPAs months have gone by, ChatGPT has become a widely-known tool for its ability to assist users with various tasks, such as writing stories, providing educational information, performing legal tasks, and proofreading content. OpenAI's capabilities are highly advanced and exceed expectations regarding its functionalities. However, it's important to note that these advanced capabilities also carry the potential for harm, as it has recently been discovered that OpenAI can create polymorphic malware.

What is polymorphic malware?

Malicious software known as polymorphic malware frequently modifies its structure or outward appearance to avoid being discovered by anti-virus solutions. It accomplishes this by concealing its code through encryption, code obfuscation, or other methods. Because of this, each instance of the virus is distinct, which makes it more challenging for security systems to detect and block it. Because it can avoid typical signature-based security measures, polymorphic malware poses a serious threat to computer systems and is a favorite tool of online criminals.

Why is this a problem?

CyberArk states, “ChatGPT could easily be used to create polymorphic malware. This malware’s advanced capabilities can easily evade security products and make mitigation cumbersome with minimal effort or investment by the adversary.”

After extensive study, CyberArk experts discovered a way to bypass content controls that stop users from instructing ChatGTP to perform illicit tasks like producing malware or phishing email ideas.

How were the researchers able to create this code?

In their initial request to ChatGTP, the researchers requested that it "create a code injecting a shellcode into 'explorer.exe' in Python." At first, ChatGTP claimed it could not run this code because of cyber security problems. As a result, the engineers tried another approach that was a blind spot in requesting the software to perform a task: they asked for and insisted that the software write a specific code. Here is what they performed to get the code in the end.

The researchers stated, “Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we received a functional code.” As a result, they discovered that it is possible to repeatedly modify the code until they are happy with the results.

What does this mean for the software and its cybersecurity concerns?

Researchers have discovered that chatbots can spread malware and even use convincing human language to trick victims into infecting their systems. As Artificial Intelligence evolves and clever individuals find its blind spots, the potential for malicious AI applications is vast. Persistence and demand for AI capabilities, combined with the increasing knowledge of its weaknesses, make it unclear what malicious actions AI could achieve in the hands of bad actors. Make sure you stay alert to AI’s evolving functionalities and how they can affect your work processes.

Technology Insights

Best ways to support small business IT

Best ways to support small business IT

Small businesses form the backbone of our economy, contributing to job creation, innovation, and community...
Read More
Navigating the Complex Landscape of AI in Cybersecurity

Navigating the Complex Landscape of AI in Cybersecurity

In today’s digital age, artificial intelligence has continued to evolve and integrate into various business...
Read More
The Threat of Cyberattacks on Water Systems

The Threat of Cyberattacks on Water Systems

Water is an imperative component of life and a critical infrastructure sector. We can sometimes...
Read More
chevron-down linkedin facebook pinterest youtube rss twitter instagram facebook-blank rss-blank linkedin-blank pinterest youtube twitter instagram