How Are CIOs Dealing with ChatGPT Security Risks?
GenAI can increase productivity — but it can also create a new world of security headaches. Here's how CIOs are tackling ChatGPT and other GenAI-app-related risks.
Generative AI (GenAI) tools like ChatGPT are creating a special set of headaches for CIOs. On the one hand, these tools can unlock significant productivity gains for employees. On the other hand, they present potential threats to data privacy, regulatory compliance, and intellectual property.
The issue isn’t isolated to ChatGPT. Employees who find value in ChatGPT will eventually try other GenAI solutions. Some of these systems may have weaker guardrails. Some may even be malicious.
CIOs play a dual role in this situation — one that balances using GenAI to gain a competitive advantage against aligning its use with corporate security and compliance standards.
In this article, we’ll explore the strategies CIOs are employing to address GenAI security concerns, from policy development and access controls to advanced monitoring and incident response.
What are the security risks associated with ChatGPT?
GenAI apps like ChatGPT present five key security risks to any organization:
- Data leakage and privacy
- Intellectual property (IP) risks
- Malicious use cases
- Data poisoning
- Regulatory and compliance issues
Let’s look at each of these in detail.
Data leakage and privacy
User input risks. When employees send data to ChatGPT-style chatbots, there’s a risk that data could be stored or used to improve the model. If the submission contains users’ personally identifiable information (PII), a data leak on the GenAI tool’s side could have severe regulatory and reputational consequences.
Data retention policies. How long does a given GenAI system store the data that users send it? Who can access it? Without clear data access and retention policies, there’s a risk this information could be accessed or unintentionally exposed.
Compliance with data privacy laws. Many organizations must adhere to data privacy laws, such as GDPR or CCPA, that mandate specific handling, storage, and transfer rules for personal data. Casual use of GenAI systems like ChatGPT could lead employees to violate these regulations unintentionally.
Ensuring GenAI output is compliant. Using GenAI could itself generate unique forms of your data that are hard to identify as sensitive information. Consider the various forms that a U.S. Social Security Number could take:
123-45-6789
123 45 6789
123.45.6789
123456789
123-456-789 (altered grouping)
etc.
Different data formats, especially in unstructured text, already make compliance challenging. Adding AI-generated output into the mix increases the challenge level even further.
Intellectual property risks
Sensitive IP in prompts. Suppose an employee asks a ChatGPT-style chatbot to summarize a technical specification or fix a block of code for them. That requires uploading this sensitive intellectual property to the Large Language Model (LLM), putting that corporate IP at risk of being stolen and reused.
Risk of model training on IP. Without clear guardrails in place, there’s a possibility that IP could be used to train a GenAI system’s model — or, barring that, that the IP is inadvertently leaked (e.g., via logs or an auditing database).
IP risks of using GenAI-generated output. There’s still an active debate in the United States and elsewhere about whether AI-generated material is subject to copyright. There are also risks associated with using output based on materials that may have restrictive licensing policies.
For example, “Copyleft” is a form of open source that requires all derivative works to be covered by Copyleft. There are currently more than 3 million Copyleft-covered repositories on GitHub. What’s the legal status of AI-generated code that uses some of these repositories as training data?
The answer, currently, is “no one knows.” There are currently several GenAI-related lawsuits underway. Verdicts could affect the legal status of derivative works created by those models.
Malicious use cases
Employees may also fall victim to malicious uses of GenAI designed to steal company information or assets. CIOs need to be aware of these risks so they can educate and prepare employees. For example:
Phishing and social engineering. Some threat actors are attempting to use GenAI systems to generate traditional spam, such as phishing attacks that get users to click on legitimate-looking fake links and share sensitive information. Indeed, Singapore’s Government Technology Agency ran an experiment that found AI-generated phishing emails were more effective than human-generated messages — and took less time to boot.
Impersonation. Employees may fall prey to people who use GenAI to impersonate executives or clients. Threat actors in the U.K. used this technique to con an energy firm out of $243,000.
Automation of other harmful content. Malicious actors might use GenAI to generate large volumes of realistic-sounding disinformation or fake news designed to influence employee behavior. This information can then be used for more sophisticated phishing or social engineering attacks.
Malicious data issues
Prompt injection. Most GenAI systems are susceptible to prompt injection attacks, where external actors disguise malicious inputs to manipulate a system into outputting sensitive data or creating false information. One study found that 20% to 25% of all GenAI systems were susceptible to some form of prompt injection.
Data poisoning. Some malicious actors even attempt to inject bad data into a GenAI’s system’s training set — a technique known as data poisoning. This can open up the system to so-called backdoor attacks, where the model releases malicious results aligned with the hacker’s intentions. Data poisoning can have drastic consequences in fields such as healthcare and finance.
Regulatory and compliance issues
Compliance with industry regulations. Sectors such as healthcare, finance, and government must adhere to strict regulatory frameworks that govern data use, storage, and sharing. Failing to do so could lead to legal penalties, including fines and possibly imprisonment.
Data residency concerns. Many jurisdictions have data residency laws that require keeping customer data in its country of origin. Violating these can result in stiff penalties — as Meta found out when European Union regulators slapped it with a €1.2 billion fine for transferring EU users’ data to the U.S. GenAI programs that don’t comply with applicable data residency and transfer laws could land your company in hot water.
Strategies CIOs are using to address ChatGPT and GenAI security risks
That’s a lot of surface area to cover. The challenge is: how do you manage this without shutting down GenAI apps completely?
Here are some strategies CIOs use to unlock productive uses of GenAI while addressing these risks.
Strategy #1: Employee training and awareness
One of the most immediate steps CIOs take is to institute training programs around ChatGPT and similar GenAI apps. Training should cover the do’s and don’t’s of GenAI usage, with an emphasis on protecting sensitive information and proprietary data.
Many orgs also use real-world scenarios to drive the training home. For example, IT may manage a simulated phishing or impersonation attack using GenAI to see how many employees, based on the training, can identify the communications as malicious.
Strategy #2: Developing and enforcing AI usage policies
Formal GenAI policy documentation clearly defines what data can be processed, approved use cases, and the conditions under which GenAI can be used for customer interactions. These policies should align with industry-specific standards — such as HIPAA for health care or PCI-DSS for payment processing.
CIOs should also enunciate the consequences for failing to follow GenAI policies — e.g., termination of employment in cases involving severe risk or loss.
Strategy #3: Access controls and permission management
Another strategy is controlling who has access to GenAI tools for work. For example, the CIO’s office might approve only customer support, research and development (R&D), and specific project teams.
CIOs can employ several technologies to manage access automatically. Granular policy enforcement frameworks, Single Sign-On (SSO) integration, and secure enterprise browsers ensure that only employees in certain divisions or administrative groups can access GenAI tools.
Strategy #4: Implementing data loss prevention (DLP) and monitoring tools
Data Loss Prevention (DLP) tools monitor, identify, and flag line-of-business applications to detect when employees input potentially sensitive information into third-party systems. CIOs can use DLP in conjunction with monitoring tools to measure GenAI tool usage, identify potential misuse, and audit conversations with GenAI systems.
Strategy #5: Gated access and isolated environments
Many CIOs also consider restricting or “gating” access to GenAI services. Solutions like gateways, which act as intermediaries between the user and a SaaS app, can monitor and log all communications with an external chatbot.
Companies may also look at building an API layer that examines and sanitizes content users attempt to send to ChatGPT and other GenAI apps. Another option is creating isolated environments (sandboxes) disconnected from other corporate services.
Strategy #6 - Legal and contractual safeguards
CIOs often incorporate GenAI policies into their legal agreements, such as vendor contracts. These stipulations set rules for data handling and regulatory compliance that spell out appropriate and inappropriate uses of GenAI. These may cover topics such as:
- Requirements for data segregation and data deletion upon request
- Provisions for security audits
- Clauses clarifying data ownership, especially in cases where a company’s proprietary information may be sent to a GenAI app
For organizations that use GenAI for critical operations, the CIO may also require regular audits of GenAI vendors to verify their compliance levels. Many CIOs require that GenAI vendors like ChatGPT obtain and renew certifications such as SOC 2 and ISO 27001 to validate their security posture.
Getting ahead of ChatGPT security risks with enterprise browsers
Implementing many of these security measures to monitor GenAI usage takes time. What’s more, they usually require configuring multiple new tools and systems.
An enterprise browser like Island provides an easier way to control and monitor access to all SaaS applications, including GenAI systems, all from a single administrative interface. Using Island, you can:
- Control access to GenAI applications using RBAC
- Block unapproved solutions and gracefully redirect users to sanctioned GenAI apps
- Institute powerful and highly configurable DLP protection, including copy/paste for specific data types and file upload and download controls
- Mask output from GenAI applications to hide potentially sensitive data
- Log and monitor usage of GenAI apps across the organization
Island also includes the Island AI Assistant, which integrates ChatGPT directly into the browser. With Island AI Assistant, users can leverage the power of ChatGPT to write a cold outreach email, research competitors, or check code for bugs. Meanwhile, administrators can set DLP and access policies to keep corporate data secure.
Turning GenAI challenges into competitive advantages
The integration of generative AI tools like ChatGPT in the enterprise brings both unprecedented opportunities and critical risks. As we've seen, the key to unlocking its potential lies in adopting a balanced approach: leveraging advanced technologies, robust policies, and employee training to mitigate security and compliance issues.
CIOs who proactively address these challenges position their organizations not just to use GenAI safely, but to gain a strategic edge. With the right tools — like an enterprise browser — businesses can harness the transformative power of GenAI while safeguarding their most valuable assets.