Enterprises are increasingly turning to generative AI itself as a countermeasure to the information leaks that widespread ChatGPT use can cause. ChatGPT, often described as the new ‘DNA’ of shadow IT, introduces unprecedented risk vectors, forcing IT and cybersecurity executives to strike a balance between agility and security. OpenAI’s data points to rapid enterprise uptake, with upwards of 80% of Fortune 500 firms having employees using ChatGPT across departments.

A recent Harvard University study quantifies the impact, finding a 40% boost in workforce productivity attributable to ChatGPT. MIT research likewise highlights ChatGPT’s role in narrowing skill gaps and speeding up document production, further improving enterprise efficiency. Despite these gains, roughly 70% of employees who use ChatGPT have not disclosed that fact to their supervisors, pointing to a gap in organizational transparency.

Reducing the risk of intellectual property loss without sacrificing speed

The greatest risk associated with ChatGPT is the inadvertent disclosure of sensitive information, such as intellectual property, financial data, and HR records, through widely accessible large language models (LLMs). Companies continue to unintentionally expose confidential data this way, and the prospect remains a top concern for security and senior management professionals.

Addressing the issue ultimately comes down to changing user behavior. Many organizations are therefore exploring generative AI-based answers to this security conundrum. Attention is converging on technologies such as generative AI isolation, which keep sensitive data from ever reaching ChatGPT, Bard, and similar platforms. The challenge is to preserve the efficiency, speed, and process gains ChatGPT delivers while containing the risks that come with it.

In this context, Alex Philips, CIO of National Oilwell Varco, has emphasized the importance of educating corporate leadership on both the benefits and the dangers of ChatGPT and generative AI. He briefs his board regularly on how generative AI technologies are evolving, with a focus on maximizing their upside while minimizing risk. This ongoing education helps set realistic expectations and put guardrails in place to prevent data leaks.

To meet the security challenges ChatGPT poses, an array of new technologies is emerging. Offerings from Cisco, Ericom Security (a Cradlepoint company) with its Generative AI Isolation, Menlo Security, Nightfall AI, Wiz, and Zscaler are among the most prominent on the market. These technologies are designed to secure ChatGPT interactions without sacrificing operational efficiency, giving security leaders practical tools for a pressing problem.

How vendors are taking on the challenge

The six leading providers securing confidential data during ChatGPT sessions take distinct approaches. Ericom Security’s Generative AI Isolation and Nightfall for ChatGPT lead the field in adoption.

Ericom’s approach, under Cradlepoint, is clientless: user interactions with generative AI platforms run in a virtual browser inside the Ericom Cloud Platform, where data loss prevention (DLP) and access policies are enforced. By routing all traffic through its cloud infrastructure, the company aims to stop personally identifiable information (PII) and other sensitive data from ever being submitted to generative AI platforms like ChatGPT. The approach is notable for enforcing least-privilege access through its cloud-based architecture; a minimal sketch of the underlying pattern follows.
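To make the pattern concrete, here is a minimal sketch of proxy-side DLP enforcement: outbound prompts are scanned against simple detection rules and redacted before being forwarded to a gen AI endpoint. The patterns, names, and redact-rather-than-block policy are illustrative assumptions, not Ericom’s actual implementation.

```typescript
// A minimal sketch of proxy-side DLP enforcement; all names and patterns
// are illustrative assumptions, not Ericom's actual implementation.

const DLP_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,         // U.S. Social Security numbers
  creditCard: /\b(?:\d[ -]?){13,16}\b/g, // loose credit-card match
  email: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, // email addresses
};

interface DlpResult {
  redactedPrompt: string;
  findings: string[]; // names of the rules that fired
}

// Scan an outbound prompt and redact matches before it leaves the cloud
// platform. Policy choice here: redact and forward rather than block.
function enforceDlpPolicy(prompt: string): DlpResult {
  const findings: string[] = [];
  let redactedPrompt = prompt;
  for (const [rule, pattern] of Object.entries(DLP_PATTERNS)) {
    const replaced = redactedPrompt.replace(pattern, `[REDACTED:${rule}]`);
    if (replaced !== redactedPrompt) {
      findings.push(rule);
      redactedPrompt = replaced;
    }
  }
  return { redactedPrompt, findings };
}

// Example: what the isolation layer would forward to ChatGPT.
const result = enforceDlpPolicy(
  "Summarize this HR note for employee 123-45-6789, reachable at jane@corp.com"
);
console.log(result.redactedPrompt); // sensitive fields replaced before egress
console.log(result.findings);       // ["ssn", "email"]
```

Real products layer contextual classifiers on top of pattern matching, but the control point is the same: the check runs in infrastructure the enterprise controls, before any text reaches the AI platform.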

Nightfall AI, for its part, offers three tailored solutions for organizations looking to keep confidential information out of ChatGPT and similar platforms: Nightfall for ChatGPT, a browser-based tool that detects and redacts sensitive data in real time; Nightfall for LLMs, an API that detects and redacts confidential data used in training LLMs; and Nightfall for SaaS, which integrates with mainstream SaaS applications to prevent sensitive-data leakage across cloud environments. A sketch of the shared scan-and-redact pattern follows.
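The common thread across these products is scan-and-redact before data leaves the organization’s control. The sketch below shows that pattern as a client for a generic DLP scan service; the endpoint, auth header, and response shape are assumptions for illustration, not Nightfall’s documented API.

```typescript
// A hypothetical client for a DLP scan API; the endpoint, auth header, and
// response shape are assumptions for illustration, not Nightfall's
// documented API.

interface ScanFinding {
  detector: string; // e.g. "CREDIT_CARD_NUMBER"
  start: number;    // character offset where the match begins
  end: number;      // character offset where the match ends
}

// Scan text via the (assumed) DLP service, then redact every finding.
async function scanAndRedact(text: string, apiKey: string): Promise<string> {
  const response = await fetch("https://dlp.example.com/v1/scan", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ payload: [text] }),
  });
  if (!response.ok) throw new Error(`scan failed: ${response.status}`);

  const findings: ScanFinding[] = await response.json();

  // Redact from the end of the string so earlier offsets stay valid.
  let redacted = text;
  for (const f of [...findings].sort((a, b) => b.start - a.start)) {
    redacted =
      redacted.slice(0, f.start) + `[${f.detector}]` + redacted.slice(f.end);
  }
  return redacted;
}

// Usage: sanitize a prompt before it reaches ChatGPT or an LLM training set.
// const safe = await scanAndRedact(rawPrompt, process.env.DLP_API_KEY!);
```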

Gen AI is defining the future of knowledge now

Generative AI is becoming a pivotal knowledge resource for businesses, and outright bans on ChatGPT, Bard, and similar AI chatbots tend to backfire. Restricting their use drives the growth of ‘shadow AI’, as employees seek out alternative AI tools, making data confidentiality even harder to maintain.

A growing number of CIOs and CISOs are taking a more strategic approach, piloting and deploying generative AI systems that mitigate risk at the browser level. Secure cloud architectures such as Ericom’s give large enterprises a scalable way to prevent unintentional sharing of sensitive information; the sketch below shows what browser-level interception can look like.
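As an illustration only: a content-script-style handler could intercept a chat form’s submission and run the same DLP check from the earlier sketch before the prompt leaves the browser. The selector, the redact-and-warn behavior, and the enforceDlpPolicy helper are assumptions, not any vendor’s product.

```typescript
// An illustration of browser-level interception in the spirit of the
// extension-based tools above; the selector, the redact-and-warn behavior,
// and the enforceDlpPolicy helper are assumptions, not any vendor's product.

// Re-use a DLP check like the enforceDlpPolicy sketch shown earlier.
declare function enforceDlpPolicy(prompt: string): {
  redactedPrompt: string;
  findings: string[];
};

// Run before a gen AI chat form submits, so the prompt is sanitized
// before it ever leaves the browser.
document.addEventListener(
  "submit",
  (event) => {
    const form = event.target as HTMLFormElement;
    const promptField = form.querySelector<HTMLTextAreaElement>("textarea");
    if (!promptField) return;

    const { findings, redactedPrompt } = enforceDlpPolicy(promptField.value);
    if (findings.length > 0) {
      // Swap in the redacted text and warn the user instead of blocking.
      promptField.value = redactedPrompt;
      alert(`Sensitive data redacted before sending: ${findings.join(", ")}`);
    }
  },
  { capture: true } // fire before the page's own submit handlers
);
```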

The primary objective is to turn the rapid advances in generative AI into a strategic asset, and it falls to IT and security professionals to make that happen. CISOs and security teams must keep their knowledge of emerging technologies and strategies current to protect confidential, personally identifiable, and proprietary data. Understanding the evolving landscape of data protection solutions is vital to maintaining a competitive edge in a knowledge-driven business environment.

Gerry Grealish, Vice President of Marketing for Cradlepoint’s Ericom cybersecurity unit, emphasizes the need for proactive measures in managing the risks of generative AI websites. “Our Generative AI Isolation solution is designed to enable businesses to leverage the benefits of generative AI while effectively countering potential data loss, malware threats, and compliance challenges,” he notes. The approach aims to balance exploiting generative AI’s capabilities with the imperative of securing critical data.