5 Things CISOs Should Consider When Employees Use ChatGPT (According To ChatGPT)

We asked ChatGPT, “What should CISOs consider when employees use ChatGPT?” Here's what this generative AI app had to say.

February 23, 2023 | Author: Allison Bergamo

ChatGPT, the generative AI tool from OpenAI, is one of the fastest-growing applications in recent history, amassing one million users in just five days. To put this in perspective, here is how long it took other popular apps to reach the same milestone:¹

  • Netflix – 3.5 years
  • Twitter – 2 years
  • Facebook – 10 months
  • Spotify – 5 months
  • Instagram – 2.5 months

One of the reasons for ChatGPT’s surging popularity is its capacity to replicate human language, which can be used to create many types of content. It’s a safe bet that your employees, and your clients’ employees, are using ChatGPT for everything from writing blog posts and code to composing song lyrics.

As with any application, you need to be aware of the security issues surrounding ChatGPT. So we asked ChatGPT directly: “What should CISOs consider when employees use ChatGPT?” The following is the verbatim response we received (bold typeface is our own).

“As a language model, I am often used by employees for various purposes, including generating text, answering questions and performing tasks. Here are some key considerations that CISOs should keep in mind when employees use ChatGPT:

1. Data Security: ChatGPT has access to large amounts of sensitive information, so it’s important to ensure all data transmitted through the model is securely encrypted.

2. Model biases: Like all AI models, I was trained on a large dataset that may reflect certain biases. CISOs should be aware of these biases and take steps to mitigate their impact, such as using multiple models or using additional data sources to help balance results.

3. Model performance: It’s important to regularly monitor the performance of ChatGPT and other AI models to ensure that they are providing accurate and relevant results. CISOs should also be prepared to take action if the model’s performance deteriorates or if it is not providing results that meet the organization’s requirements.

4. Model accountability: CISOs should ensure that there is a clear chain of accountability for decisions made by ChatGPT and other AI models. This can include regular audits, internal monitoring processes and clear documentation of the training data and algorithms used to generate results.

5. Privacy considerations: The use of ChatGPT raises privacy concerns, especially with regards to personal information. CISOs should ensure that all privacy laws and regulations are being followed, and that appropriate measures are in place to protect sensitive information.

By considering these factors, CISOs can help ensure that the use of ChatGPT and other AI models aligns with their organization’s security and privacy goals.”

Well said.
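
Several of these considerations translate directly into engineering controls. As a rough illustration of items 1, 4 and 5, below is a minimal sketch, in Python, of the kind of wrapper an organization might place between employees and ChatGPT: it redacts obvious personal data from a prompt and writes an audit-log entry before the prompt is forwarded. The regex patterns and the send_to_chatgpt function are illustrative assumptions only, not part of any OpenAI tooling; a production deployment would rely on a vetted data-loss-prevention service and the organization’s own HTTPS client.

```python
import hashlib
import json
import logging
import re
from datetime import datetime, timezone

# Illustrative PII patterns only -- a real deployment would use a vetted
# data-loss-prevention library rather than this short, assumed list.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

audit_log = logging.getLogger("chatgpt_audit")
logging.basicConfig(level=logging.INFO)


def redact(prompt: str) -> str:
    """Replace obvious personal data with placeholder tokens (consideration 5)."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt


def submit_prompt(user_id: str, prompt: str) -> str:
    """Redact, audit, and forward an employee prompt (considerations 1, 4 and 5)."""
    cleaned = redact(prompt)

    # Audit trail: who sent what, and when (consideration 4). The raw prompt is
    # recorded only as a hash so the log itself does not leak sensitive text.
    audit_log.info(json.dumps({
        "user": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "redactions_applied": cleaned != prompt,
    }))

    # Forwarding the cleaned prompt over TLS is consideration 1;
    # send_to_chatgpt is a hypothetical stand-in for whatever HTTPS
    # client the organization actually uses.
    return send_to_chatgpt(cleaned)


def send_to_chatgpt(prompt: str) -> str:
    # Placeholder: in practice this would call the ChatGPT API over HTTPS.
    return f"(response to: {prompt})"


if __name__ == "__main__":
    print(submit_prompt("employee-42",
                        "Summarize the contract for jane.doe@example.com"))
```

Run on the example prompt above, the wrapper logs a redaction flag for the employee and passes “[REDACTED-EMAIL]” to the model instead of the actual address.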

 

Source:

1. How Long Did It Take for ChatGPT to Reach 1 Million Users?

 
