3 Ways Criminals Use ChatGPT To Enhance Cyberattacks

ChatGPT has become a go-to tool for many of your clients—and for cybercriminals. Here are a few ways threat actors are using ChatGPT, including foothold assistance, reconnaissance and phishing.

  • July 7, 2023 | Author: Allison Bergamo


ChatGPT has become a go-to tool for many of your clients—and for cybercriminals. A recent Cloud Security Alliance (CSA) report, Security Implications of ChatGPT, details how threat actors can exploit AI-driven systems in different aspects of cyberattacks. Following are a few ways threat actors are using ChatGPT, including foothold assistance, reconnaissance and phishing.

1. Better foothold assistance

Foothold assistance is the process of helping cybercriminals establish an initial presence within a target system or network. According to the CSA report, “This usually involves the exploitation of vulnerabilities or weak points to gain unauthorized access.”

Attackers are using AI tools to automate the discovery of vulnerabilities and simplify the process of exploiting them, making it easier to gain initial access to their targets. The CSA report describes an example in which researchers asked ChatGPT to examine a code sample of more than 100 lines for vulnerabilities. In this exercise, ChatGPT accurately pinpointed a file inclusion vulnerability.
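The CSA report's actual code sample isn't reproduced in this article, but as an illustration of the bug class, a file (path) inclusion flaw typically looks like the hypothetical handler sketched below in Python: user-supplied input reaches a file-open call unchecked, so a value such as `../../etc/passwd` escapes the intended directory.

```python
import os

TEMPLATE_DIR = "/var/www/templates"

def render_page_unsafe(page: str) -> str:
    # VULNERABLE: 'page' comes straight from the request. A value like
    # "../../etc/passwd" traverses out of TEMPLATE_DIR and reads an
    # arbitrary file -- the file inclusion pattern an AI code review
    # (or a human one) should flag.
    with open(os.path.join(TEMPLATE_DIR, page)) as f:
        return f.read()

def render_page_safe(page: str) -> str:
    # FIX: resolve the final path and confirm it is still inside
    # TEMPLATE_DIR before opening it.
    full = os.path.realpath(os.path.join(TEMPLATE_DIR, page))
    root = os.path.realpath(TEMPLATE_DIR)
    if not full.startswith(root + os.sep):
        raise ValueError("invalid template name")
    with open(full) as f:
        return f.read()
```

This is a generic sketch (the directory name and function names are invented for illustration), not the code the CSA tested, but it is the kind of pattern the report says ChatGPT identified.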

2. Faster reconnaissance

Cybercriminals are well-organized and well-educated, often conducting reconnaissance to gather information about a target system, network or organization prior to launching an attack. This allows them to identify and exploit potential vulnerabilities, weaknesses and entry points, quickly gaining unauthorized access to an organization's "crown jewels": its systems and data. Gathering this information can be a time-consuming process. With ChatGPT, however, attackers can pose targeted questions that surface the information they need faster, enhancing their data collection.

3. More effective phishing

Prior to ChatGPT, cybercriminals would “slip up” and send an email that included red flags such as spelling errors and poor grammar. This made it easier for people to spot and avoid clicking on suspicious emails. Today, AI-powered tools enable cybercriminals to craft legitimate-looking emails without those red flags. According to the CSA report:

“The rapid advancements in AI technology have significantly improved the capabilities of threat actors to create deceptive emails that closely resemble genuine correspondence. The flawless language, contextual relevance and personalized details within these emails make it increasingly difficult for recipients to recognize them as phishing attempts.”

Security leaders need to provide enhanced training since it has become more challenging to differentiate between legitimate and malicious correspondence. Now is also a good time to evaluate AI classifier tools that are trained to distinguish between AI-generated and human-generated text.

Thanks to AI-powered tools, cyberattacks are increasingly difficult to identify, monitor and stop. FortiGuard Preparedness Services can help you better prepare, assess and guide SOC teams with its Incident Response Readiness subscription and Playbook Development assistance.
