Your clients are likely using ChatGPT and other generative AI solutions to write marketing copy, improve their customer service, generate source code for future software applications and more. While they may be excited about the time they save by using this technology, they may not be aware of the security risks. Here are three common AI security risks that your clients need to know about.
1. Employees Sharing Sensitive Company Information
It’s imperative to educate your clients on the risks of inputting sensitive work information into ChatGPT or any generative AI-powered tool. As Bloomberg reported, Samsung employees pasted confidential source code into ChatGPT and used the tool to generate meeting notes and summarize business reports containing sensitive work-related information.
While users can turn off chat history, ChatGPT still retains new conversations for 30 days to monitor for possible abuse. If a cybercriminal successfully hacks into ChatGPT, they could potentially access any proprietary information contained in users’ queries, as well as in the AI tool’s responses.
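One practical control worth showing clients is a pre-submission scrub that strips obvious secrets before a prompt ever leaves the organization. The short Python sketch below is purely illustrative: the pattern names, placeholder tags and sample text are hypothetical, and a production deployment would typically rely on dedicated DLP tooling rather than hand-rolled regular expressions.

```python
# Illustrative only: scrub obvious secrets from a prompt before it is sent
# to any public generative AI service. Patterns and sample text are hypothetical.
import re

REDACTION_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a placeholder tag."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

# Example: both the email address and the API key are masked before submission.
print(scrub("Summarize this: contact jane.doe@example.com, key sk-abcdefghijklmnop1234"))
```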
2. Security Vulnerabilities Within AI Tools
ChatGPT and other AI tools have only been available to the public for a short period. These solutions are “works in progress” and can have security flaws built into them. For example, OpenAI took ChatGPT offline this past spring to address a bug in one of the chatbot’s open-source libraries. The bug allowed some users to view chat titles from other active users’ chat histories, and some could even see the first message of a newly created conversation in someone else’s chat history if both users were active around the same time.
And if that’s not bad enough, the same bug revealed payment information for a small percentage of ChatGPT Plus subscribers who were active during a specific period. Those customers’ first and last names, email addresses and the last four digits of their credit card numbers were exposed.
3. Data Poisoning and Data Theft
Knowing that data is the fuel generative AI models are trained on, cybercriminals are deploying new forms of data poisoning and data theft.
In a data poisoning attack, bad actors attempt to manipulate the pre-training phase of an AI model’s development, injecting malicious or mislabeled records into the training set so the resulting model produces false or harmful responses. A toy example of the technique is sketched below.
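To make the mechanics concrete, the small Python sketch below simulates a classic label-flipping poisoning attack against a simple scikit-learn classifier. It is a minimal illustration under assumed settings: real attacks target far larger pre-training corpora, and the dataset, model and poisoning rate here are hypothetical.

```python
# A minimal sketch of label-flipping data poisoning on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a small, clean binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_training(X_tr, y_tr):
    """Train on the given data and report accuracy on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model.score(X_test, y_test)

# Baseline: train on clean data.
print("clean training set:   ", accuracy_after_training(X_train, y_train))

# "Poison" the training set by flipping 30% of the labels, simulating
# malicious records injected before training.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned training set:", accuracy_after_training(X_train, poisoned))
```

The poisoned run scores noticeably worse on the same test data, which is the point of the attack: the model’s behavior is degraded without the defender ever touching the model itself.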
Cybercriminals are also stealing the data sets used to train generative AI models. If your clients don’t have proper data encryption and access controls in place, that sensitive training data is left visible to potential hackers.
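As a starting point for that conversation, the sketch below shows one way a training-data file might be encrypted at rest using the widely used Python cryptography package (Fernet symmetric encryption). The file names and the key-handling shortcut are hypothetical; in production, keys belong in a secrets manager or KMS, and access should be gated by role-based controls.

```python
# A minimal sketch of encrypting a training-data file at rest.
# Assumes the "cryptography" package is installed; file names are hypothetical.
from cryptography.fernet import Fernet

# Create a tiny stand-in dataset so the sketch runs end to end.
with open("training_data.jsonl", "wb") as f:
    f.write(b'{"prompt": "example", "completion": "example"}\n')

# In practice the key would live in a secrets manager or KMS,
# never on disk next to the data.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("training_data.jsonl", "rb") as f:        # plaintext dataset
    plaintext = f.read()

with open("training_data.jsonl.enc", "wb") as f:    # encrypted copy at rest
    f.write(cipher.encrypt(plaintext))

# Only a process holding the key (and passing access-control checks)
# can recover the original records.
decrypted = cipher.decrypt(open("training_data.jsonl.enc", "rb").read())
assert decrypted == plaintext
```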
As with any new solution introduced into your clients’ workplaces, it’s essential that you provide the necessary security training, guidance and controls to protect their data assets. The FortiGuard Labs team can help you stay abreast of new and emerging AI security risks. Learn more about FortiGuard Labs threat intelligence and research, as well as Outbreak Alerts, which provide timely steps for mitigating breaking cybersecurity attacks.