
The Growing Impact of Generative AI Technology on Cyberattacks

March 21, 2023



AI has been a hot topic for years as its use has steadily evolved, but the explosive growth of the generative AI tool ChatGPT has recently propelled the conversation to new heights. Two months after its launch in November 2022, ChatGPT was estimated to have reached 100 million monthly active users, making it the fastest-growing consumer app in history. By comparison, it took TikTok about nine months to reach the 100 million user milestone, and Instagram two and a half years.¹

This has heightened the importance of determining how ChatGPT and future generative AI tools will affect cybersecurity, both for offensive security researchers and, unfortunately, for attackers as well. OpenAI, the team behind ChatGPT, has tried to discourage or block its use for malware development. That said, regardless of the ethics of a given use, ChatGPT is a force multiplier. It has made machine learning more accessible to a broad range of individuals, including those with limited technical knowledge, reducing the need for trial and error and making it easier and faster for cybercriminals to develop more sophisticated attacks.

To provide an example, when I started using ChatGPT for offensive computing research, I asked if it could write malware, and it said that it could not. I then asked if it could help me write a Python app to identify SMB shares on a network (SMB is a protocol for sharing files over a network, and open shares should typically not be exposed). It replied that it could and then produced relevant code. Next, I asked additional questions to walk down the path of developing malware code. Out of a responsibility not to let that information get into the wrong hands, I will omit the detailed questions here (reach out if you want me to share specifics). With each iteration of questions, ChatGPT continued to modify and optimize the code until it could be used to spread malware across a network. ChatGPT wrote this code in about five minutes; on my own, it would have taken me several days.
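To give a sense of the benign starting point (and only the starting point; I am deliberately omitting the malicious follow-on steps), here is a minimal sketch of the kind of code that first request produces: a Python script, using only the standard library, that checks which hosts on a subnet accept connections on TCP port 445, the SMB port. Note that actually listing the shares on those hosts would require an SMB library; this sketch, and the subnet in it, are illustrative assumptions, not the exact code ChatGPT generated.

    import ipaddress
    import socket

    SMB_PORT = 445  # modern Windows file sharing (SMB) listens on TCP 445

    def hosts_with_smb(cidr, timeout=0.5):
        # Return the addresses in the subnet that accept a TCP connection
        # on the SMB port; closed or unreachable hosts are skipped.
        found = []
        for addr in ipaddress.ip_network(cidr, strict=False).hosts():
            try:
                with socket.create_connection((str(addr), SMB_PORT), timeout=timeout):
                    found.append(str(addr))
            except OSError:
                pass  # connection refused, filtered, or timed out
        return found

    if __name__ == "__main__":
        # Placeholder subnet; only scan networks you are authorized to test.
        for host in hosts_with_smb("192.168.1.0/28"):
            print(f"SMB port open on {host}")

On its own, this is a routine network-inventory task a defender might run as well; the risk in my experiment came from the follow-up prompts that turned reconnaissance output like this into propagation code.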

At a high level, this demonstrates that if you “ask” a generative AI engine such as ChatGPT to do something malicious, it will not provide an answer initially. But if the user understands the end goal and can shape their questions to walk a gray line between offensive and defensive, then ChatGPT can become a tool that helps an attacker reach their result in record time.

So, how are criminal hackers using generative AI tools today? It depends on the sophistication of the attacker. Someone with basic skills could do what I did in the previous example, while someone with a strong skillset could accelerate their methodologies very quickly. In any case, what previously took weeks or months to develop can be accomplished with ChatGPT much more quickly.

This introduces two new risks to organizations. First, it accelerates attackers’ capabilities and evolves their sophistication, and organizations may not have the tools needed to defend themselves from these more advanced methods and tactics. Second, less sophisticated attackers love low-hanging fruit. For example, given the high level of interest in ChatGPT, there has been a recent increase in malicious advertisements for desktop and mobile apps that claim to let users interact with ChatGPT, even though there is no official ChatGPT app available to download. Many people fall for the trap and download these apps, which are simply trojans that introduce malware into the environment (note: ChatGPT itself doesn’t produce the malware).

From a healthcare industry perspective, the implications for cybersecurity are even more significant. Healthcare has become one of the prime targets for cybercriminals due to the large amount of sensitive data stored within electronic health records and other healthcare systems. One of the main concerns is the potential for cybercriminals to use AI tools to infiltrate healthcare systems and steal or manipulate sensitive patient data, leading to serious consequences for patient health.

In closing, while cybercriminals’ use of AI tools and ChatGPT isn’t necessarily introducing new methods of cyberattack, it could exponentially increase the frequency and volume of attacks and allow attackers to prey on users of a popular new tool who may be letting their guard down. As the use of generative AI tools in cyberattacks continues to grow, organizations must take proactive steps to defend against such threats. This includes investing in advanced cybersecurity technologies such as AI-powered threat detection and response systems, implementing robust security protocols, and educating employees on the risks associated with phishing attacks and other forms of cybercrime.

Interested in learning more? Contact us at customersfirst@gocloudwave.com. 

To learn more about how to advance your cybersecurity strategy and provide education for your teams, join our Cybersecurity Insider Program to get exclusive access to live monthly educational webinars, on-demand training, private YouTube and LinkedIn groups, threat intelligence, and more. Register here.

John Gomez, Chief Security and Engineering Officer, CloudWave