
Data Security Concerns in the Rise of Generative AI

August 6, 2024


Discover the top GenAI integrations, their impact on data security, legal implications for businesses, and key elements of an AI security policy  

Generative Artificial Intelligence (GenAI) is a growing trend. Because these technologies are relatively new, they are often delivered as Software-as-a-Service (SaaS), requiring security teams to address the expanded risk surface within their cloud security programs. GenAI can take the form of entirely new applications like ChatGPT, enhancements to existing applications like Microsoft 365 Copilot and Google Gemini, or integrations within essential business SaaS applications like Slack or Google Workspace. While these tools improve work efficiency, they also raise serious AI, data security, and compliance concerns.

Discussion of generative AI security has intensified following multiple reports of sensitive data leaking through OpenAI’s ChatGPT. For example, in March 2023, a ChatGPT bug exposed users’ payment data. In another incident, ChatGPT leaked company secrets belonging to Samsung, leading to an internal ban on the tool. All in all, half of data security specialists name governing GenAI adoption as a top data security challenge, and many experts anticipate strict limits on AI use in the workplace.

Understanding the risks associated with AI usage can help organizations ease their GenAI anxiety. So, let’s look at the top GenAI integrations, their impact on data security, the legal implications for businesses, and the key elements of an AI security policy.

The Most Common GenAI Integrations

The most common GenAI integrations with SaaS platforms fall into three broad categories, each with different functionality and different levels of access to SaaS data: standalone applications (e.g., ChatGPT), AI enhancements built into existing suites (e.g., Microsoft 365 Copilot, Google Gemini), and third-party GenAI integrations added to business platforms such as Slack or Google Workspace.

Adopting GenAI brings significant capabilities but also presents distinct security challenges. Unlike standard SaaS applications and integrations, GenAI tools often need extensive data access to operate effectively. Combined with the rapid pace of change in the GenAI sector, this access creates a complicated security environment that can expose organizations to vulnerabilities. Furthermore, US and international data security legislation is constantly evolving, leading to stricter requirements for GenAI use.

Let’s see how generative AI affects data security in more detail.  

How Generative AI Affects Data Security

Shadow AI and unsanctioned use

Many of the GenAI tools now flooding the market come from lesser-known vendors, and business units often experiment with several of them to gauge their value. Free trials and appealing marketing encourage employees to adopt these tools without IT’s approval, creating shadow AI that obscures data exposure and complicates security enforcement.

Data everywhere

GenAI is notably data-hungry, with even minor features potentially requiring access to a broad spectrum of data within your SaaS ecosystem. This expansive access can widen the attack surface and raise the risk of data breaches. GenAI tools often need to access sensitive information from sources such as Zoom recordings, sales pipelines, instant messages, and customer records.
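
One practical way to gauge this exposure is to audit the OAuth grants employees have made to third-party apps. Below is a minimal sketch using the Google Workspace Admin SDK Directory API; the service account setup, file names, and email addresses are placeholder assumptions, not a prescribed configuration:

```python
# Minimal sketch: list the third-party apps (including GenAI tools) a user
# has authorized via OAuth in Google Workspace, and the scopes each holds.
# Assumes a service account with domain-wide delegation and the Admin SDK
# Directory API; file names and email addresses below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
SERVICE_ACCOUNT_FILE = "service-account.json"  # placeholder path
ADMIN_EMAIL = "admin@example.com"              # placeholder admin to impersonate
USER_EMAIL = "employee@example.com"            # placeholder user to audit

creds = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES
).with_subject(ADMIN_EMAIL)
directory = build("admin", "directory_v1", credentials=creds)

# Each token entry shows an authorized app and the data scopes it can reach.
tokens = directory.tokens().list(userKey=USER_EMAIL).execute()
for token in tokens.get("items", []):
    print(token.get("displayText"), "->", ", ".join(token.get("scopes", [])))
```

Reviewing the returned scopes makes it easier to spot GenAI integrations whose access is broader than their function requires.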

Overreliance and ethical considerations

73% of leaders worry about the influx of inaccurate data produced by GenAI, and half of experts are concerned about harmful, biased, or distressing output. While GenAI tools can produce creative and informative content, they can also generate content that is factually incorrect, inappropriate, or unsafe. Overreliance occurs when GenAI presents erroneous information in an authoritative manner and people or systems trust it without oversight or confirmation, which can result in security breaches, misinformation, miscommunication, legal issues, and reputational damage.
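
A common mitigation is a human-in-the-loop gate, so that AI-generated content is never used downstream without explicit review. The sketch below illustrates the idea; the class names and workflow are assumptions for illustration, not a specific product’s API:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch: a human-in-the-loop gate that refuses to use GenAI output
# downstream until a named reviewer has approved it. The Draft class and
# workflow are illustrative assumptions, not a specific product's API.
@dataclass
class Draft:
    content: str
    source: str = "genai"
    approved: bool = False
    reviewer: Optional[str] = None

def approve(draft: Draft, reviewer: str) -> Draft:
    draft.approved, draft.reviewer = True, reviewer
    return draft

def publish(draft: Draft) -> None:
    # The gate: unreviewed GenAI content never reaches a final work product.
    if draft.source == "genai" and not draft.approved:
        raise PermissionError("GenAI output requires human review before use.")
    print(f"Published (reviewed by {draft.reviewer}): {draft.content}")

publish(approve(Draft("Q3 revenue grew 12% year over year."), "j.smith"))
# publish(Draft("Unreviewed claim"))  # would raise PermissionError
```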

Privacy perils

GenAI raises many data privacy and security issues. Many GenAI tools, especially free ones, collect user data for training purposes. Without careful scrutiny of privacy policies and data usage terms, organizations could unknowingly expose their data or even violate regulations like GDPR or HIPAA. Furthermore, GenAI models can inadvertently leak sensitive data in their outputs, putting confidential information at risk. Given these issues, half of security leaders are unsure how AI will be regulated across industries.
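
A common first line of defense is to redact obvious PII before a prompt ever reaches a third-party GenAI API. The sketch below illustrates the idea; real deployments would rely on a dedicated DLP service rather than these simplified regexes:

```python
import re

# Minimal sketch: redact obvious PII from a prompt before it leaves the
# organization for a third-party GenAI API. The patterns are illustrative;
# production systems would use a dedicated DLP service, not hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\+?\d{1,2}[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize this ticket from [EMAIL REDACTED], SSN [SSN REDACTED].
```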

GenAI and Data Security & Protection Legislation

When considering regulatory compliance in the context of GenAI usage, the fundamental principles of data security legislation apply in the same manner as they do for processing personal data in any other context (e.g., the use of cloud services). So, while AI technology may be new, the principles and processes for risk assessment and compliance remain the same.

Generative AI and HIPAA compliance

HIPAA-covered entities can use GenAI tools with electronic protected health information (ePHI) only after signing a Business Associate Agreement (BAA) with the vendor. In addition, covered entities must ensure that any ePHI provided is used only for the purposes for which they engaged the business associate.

Numerous GenAI tool providers in the healthcare sector offer BAAs. For instance, Google has developed generative AI tools such as Med-PaLM 2, helping healthcare organizations improve administrative and operational processes; Med-PaLM 2 supports HIPAA compliance and is covered by Google’s business associate agreement. Grammarly also complies with the HIPAA Security, Privacy, and Breach Notification Rules; organizations need only use Grammarly’s Business or Enterprise plans and sign a BAA with Grammarly.

Discover more on HIPAA compliance in Google Workspace.

GenAI and PCI DSS compliance

Financial organizations can use GenAI tools in compliance with PCI DSS by ensuring these tools do not process, store, or transmit cardholder data. This involves implementing strict access controls, encrypting communications, and regularly monitoring and auditing the tools' usage. By following these practices, financial organizations can leverage the benefits of GenAI while maintaining PCI DSS compliance.  

For example, customer service chatbots such as OpenAI’s ChatGPT can handle general customer inquiries without accessing sensitive payment data, and tools like IBM Watson can generate insights from financial data without compromising sensitive information. Google also recently launched Anti Money Laundering AI, a tool that detects suspicious, potentially money-laundering activity faster and more precisely.
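
To enforce the rule that cardholder data never reaches such tools, a pre-processing step can detect and mask candidate card numbers before text is sent to a chatbot. Here is a minimal sketch using a Luhn check to reduce false positives; it is illustrative only, not a complete PCI DSS control:

```python
import re

# Minimal sketch: keep cardholder data out of GenAI prompts by detecting
# candidate card numbers (Luhn-valid, 13-19 digits) and masking all but the
# last four digits. Illustrative only, not a complete PCI DSS control.
CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_valid(digits: str) -> bool:
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:          # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def mask_pans(text: str) -> str:
    def _mask(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            return "*" * (len(digits) - 4) + digits[-4:]
        return match.group()         # not a card number; leave untouched
    return CANDIDATE.sub(_mask, text)

print(mask_pans("Customer card 4111 1111 1111 1111 was declined."))
# -> Customer card ************1111 was declined.
```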

Generative AI and GDPR compliance

Organizations must consider several key GDPR compliance requirements when procuring generative AI services. Customers, suppliers, and employees have the right to know about the processing and handling of their data, so companies should manage this transparently and choose reliable AI solutions. The privacy policy should clearly outline details about AI usage.  

To use GenAI in compliance with GDPR, organizations must obtain the user’s consent or have a legitimate interest. Individual consent is not necessary for company-wide solutions like Gemini. Instead, companies should sign a Data Processing Agreement (DPA) or a contract with the AI provider. This agreement should cover details like subcontractors and the technical and organizational measures to ensure the data is well-protected.  

For example, when integrating ChatGPT into company processes, OpenAI, as its provider, would be considered a data processor. Under Art. 28 GDPR, it is necessary to conclude a DPA to define the legal commitments regarding data protection. However, OpenAI does not provide a DPA for the free versions of ChatGPT (3.5 and 4), so the free versions cannot be used compliantly. In contrast, organizations can use Fireflies in compliance with GDPR, as it offers a DPA and takes all necessary data security and protection measures.

Security Policy to Secure and Govern GenAI Usage

Even though GenAI is a relatively new trend, most organizations (90%) have already implemented policies to secure and govern GenAI adoption, highlighting a growing awareness of the need for proactive security measures in this evolving space. Why do organizations need a GenAI policy? It helps them manage the ethical, legal, reputational, security, and societal issues that arise from using GenAI systems, and it promotes the responsible and accountable application of AI technology, protecting the interests of users, customers, and the wider community.

When building out a GenAI policy, consider the following factors:

Ethical considerations

GenAI technologies can create highly realistic and convincing content, yet incorrect facts, inappropriate interpretations, and unsafe content are also possible. A GenAI policy helps prevent organizations from inadvertently creating or propagating harmful information and ensures users follow a set of ethical guidelines, preventing the misuse of AI technology.

Approval and authorization

The use of GenAI tools should be assessed and approved based on their functionality, security, and alignment with business objectives, whether for content creation, data analytics, or other relevant applications. The approval process involves evaluating the tool’s ability to meet organizational needs and compliance standards. For example, when choosing a GenAI tool designed for automating financial reports and analyzing market trends, organizations must:

Regulatory compliance

Adhere to established security protocols, including robust data encryption, stringent access controls, and regular security audits. Ensure GenAI tools comply with relevant regulations such as GDPR, HIPAA, intellectual property laws, and industry-specific standards. Evaluate the tool for potential vulnerabilities and ensure it integrates securely with existing systems, protecting data integrity and privacy. Finally, users must comply with any use restrictions imposed by the GenAI tool being used (e.g., OpenAI’s Usage Policies, StabilityAI’s Prohibited Uses, etc.).

Data security

Generative AI models typically need extensive data to train effectively. To manage this, an organization's GenAI policy must focus on data privacy and security, ensuring that user data and sensitive information are handled responsibly and securely. The policy should outline procedures for data anonymization, obtaining consent, implementing access controls, and adhering to relevant data protection regulations.
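
As an illustration of what an anonymization procedure might look like in practice, the sketch below pseudonymizes direct identifiers with a keyed hash before records are shared with a GenAI tool; the field names and key handling are assumptions for the example:

```python
import hashlib
import hmac

# Minimal sketch: pseudonymize direct identifiers with a keyed hash before
# records are used with a GenAI tool. The key never leaves the organization,
# so the vendor cannot reverse the mapping. Field names and the key source
# are placeholders; a full policy also covers consent, access, and retention.
SECRET_KEY = b"load-from-a-secrets-manager"  # placeholder, never hard-code

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "C-10293", "email": "jane.doe@example.com",
          "note": "Asked about renewal pricing."}
safe_record = {
    "customer_id": pseudonymize(record["customer_id"]),
    "email": pseudonymize(record["email"]),
    "note": record["note"],  # free text still needs separate DLP review
}
print(safe_record)
```

Because the hash is keyed and deterministic, the same customer maps to the same pseudonym across records, preserving analytical utility without exposing the identifier itself.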

Bias and fairness

AI systems can unintentionally reinforce or magnify biases in their training data. A GenAI policy should incorporate steps to identify and address biases, ensuring fairness and inclusivity. This could include focusing on diversity and representation during model training, routinely auditing models for biases, and establishing methods to correct and prevent biased outputs.  

For example, outputs generated by GenAI tools should be reviewed before the information they provide is relied upon. Users are responsible for verifying the accuracy, truthfulness, and appropriateness of GenAI-generated content before using it in any final or published work product.

How Planet 9 Can Help

Planet 9 offers data security and compliance services, including developing comprehensive GenAI policies. By prioritizing data privacy and adhering to regulatory requirements, we help your business leverage modern technology responsibly. Contact Planet 9 for expert guidance in protecting your data and enhancing your AI capabilities.

Book a Free Consultation

Schedule a free consultation today to explore how Planet 9 can help you achieve your security and compliance goals.

