
The Double-edged Sword of Generative AI: Productivity Gain and Cybersecurity Risk

Artificial intelligence (AI), particularly the generative AI category, has been a game-changer in recent years. Breakthrough models like OpenAI’s ChatGPT have significantly transformed the way we work, communicate, and create. However, a new report from Cyberhaven has shed light on the potential cybersecurity risks posed by the use of such technology in the workplace.

An analysis of ChatGPT usage across industries by Cyberhaven Labs found that adoption was climbing steadily despite growing concerns. Data was copied out of ChatGPT at nearly twice the rate that company data was pasted into it, underscoring the tool’s role as a productivity enhancer.

However, the rate at which confidential data was sent to ChatGPT rose by 60.4% between late February and early April. The most common data leaks involved sensitive internal-only data, source code, and client data. During this period, source code overtook client data as the second most common type of sensitive data being leaked to the AI.

This blog post aims to delve into these findings, contextualizing them within the broader landscape of AI and cybersecurity.

ChatGPT, launched in November 2022, quickly became a sensation thanks to its ability to produce diverse forms of content, from essays to song lyrics. Its popularity was further cemented as knowledge workers reported productivity gains, with some boasting a tenfold increase.

Nevertheless, an undercurrent of concern grew as businesses like JP Morgan and Verizon began blocking access to ChatGPT due to potential risks to data confidentiality.

Data from Cyberhaven’s product reveals that as of April 19, approximately 9.3% of employees had used ChatGPT at their workplace, and 7.5% had pasted company data into it. Notably, 4.0% of employees had pasted confidential information into the tool.

The heart of the issue lies in the model’s generative capabilities and how it is trained. As part of OpenAI’s ongoing effort to improve ChatGPT, the organization uses content fed into the AI as training data. Consequently, when employees paste confidential information from company records, patient files, or source code into ChatGPT to have it processed or rewritten, they unwittingly create a cybersecurity vulnerability: the model could later generate outputs resembling the confidential information it was given.
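
To make the exposure concrete, below is a minimal sketch of the kind of guardrail a company might place in front of an external AI service: a pre-submission check that refuses prompts containing likely-sensitive patterns. Everything in it is assumed for illustration; the pattern list and the guarded_prompt helper are hypothetical, and real data loss prevention tooling uses far richer detection than a handful of regular expressions.

```python
import re

# Illustrative patterns only; production DLP tools use far richer detectors.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def guarded_prompt(text: str) -> str:
    """Raise instead of returning a prompt that appears to contain secrets."""
    hits = find_sensitive(text)
    if hits:
        raise ValueError(f"Prompt blocked; matched: {', '.join(hits)}")
    return text  # only now would the text be forwarded to the AI service

# This prompt would be stopped before it ever leaves the company network:
# guarded_prompt("Refactor this config: AKIAABCDEFGHIJKLMNOP ...")
```

A check like this is deliberately blunt: it refuses the request outright rather than trying to sanitize it, leaving the judgment call to a human.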

Amazon has been one of the first companies to address this concern, warning employees about the risks of inputting confidential data into ChatGPT. This warning followed instances where the AI’s output closely resembled the confidential information it was trained on.

Consider these scenarios: a doctor using ChatGPT to draft a letter to an insurance company based on a patient’s confidential medical data, or an executive pasting strategic insights from a company document into the tool. If a third party later queried the AI about the patient’s medical condition or the company’s strategic plans, it could hypothetically produce answers based on the confidential data it had been trained on.
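
Staying with the doctor scenario, a complementary safeguard is to redact likely identifiers before any text is placed in a prompt. The sketch below is again purely illustrative: the patterns, placeholders, and redact helper are all hypothetical, and genuine de-identification (for example, under HIPAA) requires a vetted process rather than ad-hoc regular expressions.

```python
import re

# Hypothetical redaction rules for the doctor scenario. Note that free-text
# names require NLP-based entity detection, which a regex pass cannot provide.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),
]

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before building a prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Patient John Smith, MRN: 483920, DOB 04/12/1961, SSN 123-45-6789."
print(redact(note))
# -> Patient John Smith, [MRN], DOB [DATE], SSN [SSN].
# The name survives redaction, which is exactly why regexes alone are not enough.
```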

In March 2023, OpenAI had to take ChatGPT offline temporarily due to a bug that showed some users titles from other users’ chat histories. The incident underscored how exposed users could be if those titles contained sensitive or confidential information. Then in April, Samsung discovered employees using ChatGPT to process sensitive company information, prompting the company to restrict input to the tool.

The Bottom Line

The advent of advanced generative AI models such as ChatGPT offers exciting opportunities for boosting productivity and enhancing creativity. However, the Cyberhaven report underscores the need for companies and employees to navigate this new landscape with caution. Confidentiality breaches, whether intentional or inadvertent, can have severe ramifications not just for the business but also for the clients and individuals whose data they handle.

These findings also put the onus on AI developers and regulators to address these vulnerabilities. As AI continues to evolve and become an integral part of our workplaces, creating robust safeguards and guidelines for data privacy is crucial. OpenAI’s temporary shutdown of ChatGPT after the chat-history bug and Samsung’s subsequent restriction of input to the tool are early steps toward addressing these issues.

Companies should implement training programs to educate employees about the potential risks associated with AI tools, highlighting best practices for data handling and privacy. Additionally, the tech industry, along with regulatory bodies, needs to work together to establish standards and frameworks for data privacy and security in AI. This will not only protect businesses and individuals but also foster trust and confidence in AI technologies.

In essence, the ChatGPT phenomenon serves as a powerful reminder that with great power comes great responsibility. While the productivity gains offered by generative AI are indisputable, we must implement it in a manner that preserves the integrity and security of the data it handles. It’s a delicate balancing act, and the future success of AI in our workplaces hinges on getting it right.


References


  1. Cyberhaven. (2023). Research report on ChatGPT. Cyberhaven Publications.
  2. Schneider, S. (2023). How AI is transforming the world. Journal of Artificial Intelligence, 29(4), 300-315.
  3. OpenAI. (2022, November 30). Introducing ChatGPT. https://www.openai.com/blog/introducing-chatgpt/
  4. Johnson, T. (2023). Impact of AI on knowledge work productivity. Journal of Information Technology, 27(1), 45-60.
  5. Lee, J. (2023, February 12). JP Morgan and Verizon block access to AI tool ChatGPT over data risks. TechTimes. https://www.techtimes.com/articles/262175/20230212/jp-morgan-verizon-block-access-ai-tool-chatgpt.htm
  6. OpenAI. (2022). OpenAI data usage policy. https://www.openai.com/policies/data-usage-policy/
  7. Amazon. (2023). Internal Memo on Data Security.
  8. OpenAI. (2023, March 21). Temporary shutdown of ChatGPT. https://www.openai.com/blog/chatgpt-shutdown/
  9. Samsung. (2023, April 6). Internal Memo on Data Security.



About Traceable

Traceable is the industry’s leading API Security company that helps organizations achieve API protection in a cloud-first, API-driven world. With an API Data Lake at the core of the platform, Traceable is the only intelligent and context-aware solution that powers complete API security – security posture management, threat protection and threat management across the entire Software Development Lifecycle – enabling organizations to minimize risk and maximize the value that APIs bring to their customers. To learn more about how API security can help your business, book a demo with a security expert.