Generative AI chatbots such as ChatGPT and the large language models (LLMs) behind them have taken many industries by storm. They’ve also raised concerns about sharing sensitive business information with advanced self-learning algorithms, and about malicious actors using these tools to enhance their cyberattacks. However, the same tools can also strengthen your cybersecurity if used correctly.
Here are three ways AI tools can improve your security.
1. Improving threat-hunting queries
According to a Cloud Security Alliance (CSA) report, you can use ChatGPT and other LLMs to improve efficiency and accelerate response times when creating threat-hunting queries. By generating rules for malware research and detection tools such as YARA, ChatGPT can assist in identifying and mitigating potential threats. These rules can be customized to match your specific requirements and the threats you want to detect or monitor in your environment.
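To make this concrete, here is a minimal sketch of the kind of YARA rule an LLM might draft, paired with a tiny pure-Python matcher that mimics YARA's "any of them" condition. The rule name, string patterns, and sample payload are all hypothetical, chosen only for illustration; in practice you would load the generated rule into YARA itself and refine it for your environment.

```python
# An illustrative YARA-style rule, as an LLM might generate it. The rule
# name and patterns are made up for this example.
ILLUSTRATIVE_RULE = r"""
rule Suspected_Downloader
{
    strings:
        $a = "powershell -enc"      // encoded PowerShell launcher
        $b = "CreateRemoteThread"   // API name common in process injection
    condition:
        any of them
}
"""

def any_of_them(data: bytes, patterns: list[bytes]) -> bool:
    """Return True if any pattern appears in the data (YARA's 'any of them')."""
    return any(p in data for p in patterns)

patterns = [b"powershell -enc", b"CreateRemoteThread"]

sample = b"cmd /c powershell -enc SQBFAFgA..."  # mock suspicious payload
print(any_of_them(sample, patterns))  # True: pattern $a matches
```

The value of the LLM here is drafting and iterating on the rule text quickly; the matching itself is best left to the YARA engine.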
2. Detecting AI-generated text in cyberattacks
LLMs are widely used to generate text, and OpenAI has launched an AI classifier trained to distinguish between AI-generated and human-written text. According to the CSA, this capability could become a common feature in email protection software. Identifying AI-generated text in attacks can help detect phishing emails and polymorphic code. Per the CSA, it’s realistic to assume that LLMs could flag atypical sender addresses or their corresponding domains, and check whether links embedded in the text point to known malicious websites.
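The sender and link checks described above can be sketched in a few lines. This is a simplified stand-in for what email protection software would do, assuming a hypothetical trusted-domain allow-list and threat-intel blocklist; the domains used here are invented for illustration.

```python
# Sketch of two email checks: flag a sender whose domain is not on an
# allow-list, and flag links whose host appears on a blocklist.
# KNOWN_DOMAINS and MALICIOUS_HOSTS are hypothetical example data.
import re
from urllib.parse import urlparse

KNOWN_DOMAINS = {"example.com"}          # hypothetical trusted sender domains
MALICIOUS_HOSTS = {"evil.example.net"}   # hypothetical blocklisted hosts

def atypical_sender(address: str) -> bool:
    """True if the sender's domain is not a known/trusted domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain not in KNOWN_DOMAINS

def malicious_links(body: str) -> list[str]:
    """Return any URLs in the text whose host is on the blocklist."""
    urls = re.findall(r"https?://\S+", body)
    return [u for u in urls if urlparse(u).hostname in MALICIOUS_HOSTS]

print(atypical_sender("ceo@examp1e.com"))                    # True: lookalike domain
print(malicious_links("Pay here: http://evil.example.net/x"))  # flagged URL
```

Real products layer many more signals on top (reputation scores, header analysis, the AI-text classifier itself), but the underlying checks follow this shape.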
3. Generating and transferring security code
According to the CSA report, ChatGPT can be used to generate and transfer security code. For example, if you suspect a client has been hit by a phishing campaign, it may be difficult to discern which of their employees inadvertently executed the malicious code attached to the email they opened. ChatGPT can draft a Microsoft 365 Defender hunting query that checks for sign-in attempts on compromised email accounts, helping you block attackers from the system and determine whether affected users need to change their passwords. This can speed up your detection and response times.
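The actual Defender hunting query would be written in KQL against Defender's advanced-hunting tables; the Python sketch below only mirrors the filtering logic on a mock sign-in log so the idea is concrete. The field names and records are invented stand-ins, not the real Defender schema.

```python
# Mock hunting logic: flag accounts where a successful sign-in follows a
# failure from the same IP within a short window — a simple stand-in for a
# compromised-login hunting condition. All data below is fabricated.
from datetime import datetime, timedelta

signin_log = [
    {"account": "alice@example.com", "result": "Failure",
     "time": datetime(2023, 5, 1, 9, 0), "ip": "203.0.113.7"},
    {"account": "alice@example.com", "result": "Success",
     "time": datetime(2023, 5, 1, 9, 2), "ip": "203.0.113.7"},
    {"account": "bob@example.com", "result": "Success",
     "time": datetime(2023, 5, 1, 10, 0), "ip": "198.51.100.4"},
]

def suspicious_accounts(log, window=timedelta(minutes=10)):
    """Return accounts with a success shortly after a failure from the same IP."""
    flagged = set()
    for f in (r for r in log if r["result"] == "Failure"):
        for r in log:
            if (r["result"] == "Success"
                    and r["account"] == f["account"]
                    and r["ip"] == f["ip"]
                    and timedelta(0) < r["time"] - f["time"] <= window):
                flagged.add(r["account"])
    return flagged

print(suspicious_accounts(signin_log))  # {'alice@example.com'}
```

An LLM-drafted KQL query would express the same join-and-filter pattern directly over Defender's sign-in data; flagged accounts then feed your block-and-reset workflow.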
While leveraging AI tools can enhance your cybersecurity, they need to be “trained” for your specific requirements based on the data you provide. They need contextual understanding to provide accurate responses and identify security issues in your environment. Keep in mind that cybercriminals also have access to AI tools. Plan on regularly testing and evaluating your AI tools to identify potential weaknesses and vulnerabilities.