Compliance Strategy

Every ChatGPT Prompt Is a Data Transfer. Here Is What That Means for Your Business.

A management consultant is preparing a due diligence summary. She highlights a paragraph from a client contract -- complete with the client's name, the CEO's personal details, and financial terms -- and pastes it into ChatGPT with the instruction: "Summarize the key risks in this clause."

Two seconds later, she has a useful summary. In those same two seconds, something else happened: her client's personal data was transmitted to OpenAI's servers in the United States. Under the GDPR, that is not a minor procedural detail. It is personal data processing combined with a third-country data transfer.

This scenario is playing out thousands of times a day across European companies. Most of them have no idea it is happening.

What Happens to Personal Data Inside a ChatGPT Prompt?

Every piece of text entered into ChatGPT is sent to OpenAI's servers. OpenAI is a US-headquartered company, and notwithstanding its Irish subsidiary (OpenAI Ireland Limited), the actual data processing takes place on infrastructure in the United States.

When that text contains personal data -- names, email addresses, contract numbers, health information, client identifiers -- it constitutes processing of personal data under Article 4(2) GDPR. This is true whether the user entered the data intentionally or accidentally. It is true whether ChatGPT "stores" the data or not. The transmission itself is the processing event.

Here is where it gets worse for most companies: employees using the free version of ChatGPT or ChatGPT Plus have no Data Processing Agreement (DPA) with OpenAI. In this configuration, OpenAI is an independent data controller, not a processor acting on your behalf. Your employee is sending client data to a US-based controller with no contractual safeguards under Article 28 GDPR.

Even for paid tiers (ChatGPT Team, Enterprise, API) where OpenAI offers a DPA with Standard Contractual Clauses, the underlying problem remains: the data physically travels to the US. The DPA governs the relationship. It does not change the destination.

The EDPB Has Already Weighed In on This

On 23 May 2024, the European Data Protection Board published the report of its ChatGPT Taskforce -- the coordinating body that oversaw investigations by multiple national data protection authorities between November 2022 and February 2024.

The Taskforce identified five distinct stages of personal data processing in ChatGPT's operations:

  1. Collection of training data (including web scraping)

  2. Pre-processing and filtering

  3. Model training

  4. Processing prompts and generating outputs

  5. Using prompts to further train the model

Each stage requires its own legal basis under Article 6 GDPR. OpenAI relies on legitimate interest for all five. The Taskforce warned that this basis demands careful balancing -- and that for stages 4 and 5 specifically, "clear and demonstrable notification to end users" about how their prompts are used is decisive in the balancing test.

Critically, the Taskforce stated that no risk transfer to data subjects is permissible. OpenAI cannot shift responsibility for personal data handling onto users via its terms of service. This matters for employers: if your company has not established its own legal basis for transmitting personal data to OpenAI, a clause in OpenAI's terms saying "users are responsible for their inputs" does not absolve you.

The Legal Chain: Data Minimization, Legal Basis, and Third-Country Transfers

When an employee submits a prompt containing personal data, multiple GDPR provisions are triggered simultaneously:

Article 5(1)(c) -- Data minimization. Personal data must be adequate, relevant, and limited to what is necessary. If the consultant typed the full client name when a placeholder ("Client A") would have served the same purpose, the company as controller is in breach of the data minimization principle. This is not a theoretical argument. It is the kind of factual assessment a supervisory authority makes when investigating a complaint.

Article 6 -- Legal basis. The company needs its own legal basis for transmitting personal data to OpenAI. Consent from the data subject whose information appears in the prompt? Almost never obtained. Legitimate interest? Difficult to argue when technical alternatives exist that achieve the same purpose without transmitting personal data.

Articles 44-49 -- Third-country transfers. Transferring personal data to the US is subject to Chapter V of the GDPR. Since the Court of Justice's Schrems II judgment on 16 July 2020 (Case C-311/18), transfers to the US require supplementary measures to address the access capabilities of US intelligence agencies under Section 702 of the Foreign Intelligence Surveillance Act.

OpenAI relies on the EU Standard Contractual Clauses and, since 2023, on the EU-US Data Privacy Framework (DPF). The DPF survived its first legal challenge in September 2025, when the EU General Court dismissed an annulment action (the Latombe case). However, the applicant filed an appeal with the Court of Justice on 31 October 2025 -- and noyb, the privacy advocacy organization behind the original Schrems cases, has announced plans for a separate, broader challenge.

The pattern is familiar. Safe Harbor was invalidated in 2015. Privacy Shield was invalidated in 2020. The Data Privacy Framework is the third attempt at the same structure. Companies building their compliance strategy on the assumption that the DPF will survive indefinitely are making a bet, not a plan.

The Fines Are Real and Getting Larger

GDPR enforcement on cross-border data transfers has accelerated sharply:

  • Meta (May 2023): EUR 1.2 billion fine by the Irish Data Protection Commission for transferring Facebook user data to the US without adequate safeguards. The largest GDPR fine ever imposed.

  • Uber (August 2024): EUR 290 million fine by the Dutch Data Protection Authority for transferring driver data -- including location data, identity documents, and in some cases medical and criminal records -- to the US without valid transfer mechanisms.

  • OpenAI (December 2024): EUR 15 million fine by the Italian Data Protection Authority (Garante) for multiple GDPR violations related to ChatGPT, including the absence of a lawful basis for processing personal data for training, insufficient transparency, and inadequate age verification.

Yes, EUR 15 million is immaterial to OpenAI's balance sheet. But the precedent is not immaterial. European regulators have confirmed that AI providers are not exempt from GDPR. The same rules apply.

For a mid-sized European company, the direct fine risk under Article 83(5) GDPR is up to EUR 20 million or 4% of annual worldwide turnover -- whichever is higher. But the reputational damage of a data protection investigation involving client data may be the more consequential risk.

Three Options Companies Use Today -- and Their Honest Tradeoffs

Most companies respond to this situation with one of three strategies:

Option 1: Ban AI entirely

Some organizations prohibit ChatGPT and similar tools outright. This is the legally safest approach -- and the least realistic. Research consistently shows that approximately 77% of employees use AI tools regardless of policy, with 67% accessing them through unmanaged personal accounts. A ban does not stop usage. It pushes usage underground, where there is zero visibility and zero control.

Option 2: Allow with policy guardrails

Many companies issue internal AI usage policies: "Do not enter client names into ChatGPT." "Do not paste confidential information." This is better than nothing, but it relies on every employee, under time pressure, consistently remembering and following the policy. The consultant in the opening scenario probably read the policy. She pasted the client name anyway because she needed an answer in 30 seconds.

Option 3: Deploy enterprise DLP solutions

Enterprise Data Loss Prevention tools like Nightfall AI, LayerX, or Cyberhaven offer monitoring and filtering capabilities. The tradeoffs: costs ranging from EUR 50,000 to EUR 500,000 per year, implementation timelines of three to six months, dependency on IT departments for deployment, and -- for cloud-based DLP solutions -- the paradox that data is sent to yet another third-party server for "protection."

Technical controls can reduce this risk without blocking AI usage entirely -- but only if the control itself does not ship the data to yet another server. That is the idea behind on-device anonymization, the approach Rehydra implements with browser-local ONNX models.

A Fourth Option: Anonymize Before the Data Leaves the Browser

There is a fourth approach that combines the compliance benefits of technical controls with the accessibility of a browser extension: on-device anonymization.

The mechanism is straightforward. Before a prompt leaves the browser, a locally executed Named Entity Recognition (NER) model identifies personal data in the text -- names, email addresses, phone numbers, physical addresses, financial identifiers. Each item is replaced with a typed placeholder ("Client A," "IBAN-1," "consultant@example.com"). The anonymized prompt is sent to ChatGPT. When the response comes back, the placeholders are automatically replaced with the original values.
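
To make the replace-and-map step concrete, here is a minimal TypeScript sketch. The `detectEntities` function stands in for the browser-local NER model, and the `Entity` shape, placeholder scheme, and function names are illustrative assumptions for this article -- not Rehydra's actual internals.

```typescript
// Illustrative sketch -- types, names, and placeholder scheme are
// assumptions for this article, not Rehydra's actual internals.

type EntityType = "PERSON" | "ORG" | "EMAIL" | "PHONE" | "IBAN";

interface Entity {
  text: string;     // detected personal data, e.g. "Acme GmbH"
  type: EntityType; // category assigned by the NER model
}

// Stand-in for the browser-local NER model (ONNX running via WebAssembly).
declare function detectEntities(text: string): Entity[];

interface AnonymizedPrompt {
  text: string;                 // prompt with placeholders substituted
  mapping: Map<string, string>; // placeholder -> original value
}

function anonymize(prompt: string): AnonymizedPrompt {
  const mapping = new Map<string, string>();
  const counters = new Map<EntityType, number>();
  const seen = new Set<string>();
  let text = prompt;

  for (const entity of detectEntities(prompt)) {
    if (seen.has(entity.text)) continue; // value already replaced
    seen.add(entity.text);

    const n = (counters.get(entity.type) ?? 0) + 1;
    counters.set(entity.type, n);
    const placeholder = `${entity.type}-${n}`; // e.g. "PERSON-1"

    mapping.set(placeholder, entity.text);
    // Replace every occurrence of the original value in the prompt.
    text = text.split(entity.text).join(placeholder);
  }

  return { text, mapping };
}
```

The essential property: detection and mapping both happen before any network request, so the only text that ever reaches the AI provider is the placeholder version.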

Why does this matter for GDPR compliance?

If the prompt transmitted to OpenAI contains no personal data, there is no processing of personal data within the meaning of the GDPR. No personal data processing means no third-country transfer issue under Articles 44-49. The data minimization principle under Article 5(1)(c) is enforced technically, not merely hoped for through policy compliance.

The critical requirement is that the anonymization happens locally. If data is sent to a cloud service for anonymization, the problem is merely relocated -- instead of OpenAI receiving the personal data, the anonymization provider does. Genuine on-device processing means the NER model runs in the browser via WebAssembly, the mapping table between placeholders and original values is encrypted locally, and no data leaves the machine for the anonymization step.
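
What "encrypted locally" can look like in practice: the sketch below seals the placeholder mapping with AES-GCM using the browser's built-in Web Crypto API, so the table can be persisted without the original values ever appearing in plaintext storage. The key handling shown (a non-extractable in-memory key) is an illustrative assumption, not a description of Rehydra's actual key management.

```typescript
// Sketch: sealing the placeholder -> original mapping with AES-GCM,
// entirely in the browser via the Web Crypto API. No network calls;
// the key never leaves the device.

async function makeLocalKey(): Promise<CryptoKey> {
  return crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false, // non-extractable: key material cannot be exported
    ["encrypt", "decrypt"]
  );
}

async function sealMapping(
  mapping: Map<string, string>,
  key: CryptoKey
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per seal
  const plaintext = new TextEncoder().encode(
    JSON.stringify([...mapping.entries()])
  );
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    plaintext
  );
  return { iv, ciphertext };
}

async function openMapping(
  sealed: { iv: Uint8Array; ciphertext: ArrayBuffer },
  key: CryptoKey
): Promise<Map<string, string>> {
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: sealed.iv },
    key,
    sealed.ciphertext
  );
  return new Map(JSON.parse(new TextDecoder().decode(plaintext)));
}
```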

This converts the GDPR analysis from "complex third-country transfer requiring SCCs, supplementary measures, and a transfer impact assessment" to "no personal data transmitted, no transfer analysis required." That is a fundamentally different compliance posture.

Five Questions Your DPO Should Ask About Your Team's ChatGPT Usage Today

Regardless of which approach your organization pursues, your Data Protection Officer should be able to answer these questions:

  1. Do we know which employees are using ChatGPT and similar AI tools? Not just the officially sanctioned accounts -- the personal ones too. If you do not have a reliable number, assume a usage rate of at least 50%.

  2. Have we documented a legal basis under Article 6 GDPR for transmitting personal data to OpenAI? Article 6 requires a legal basis before processing begins. "We are working on it" is not a legal basis.

  3. Have we conducted a Transfer Impact Assessment for data transfers to OpenAI? Since Schrems II, a TIA is required for transfers to the US based on SCCs. This applies even if OpenAI participates in the EU-US Data Privacy Framework -- given the pending legal challenges to the DPF, relying on the adequacy decision alone is a risk your organization should consciously accept or mitigate.

  4. Can we demonstrate compliance with the data minimization principle through technical measures? A policy instructing employees not to enter personal data is insufficient for accountability under Article 5(2) GDPR if you cannot point to technical measures that enforce compliance. Supervisory authorities increasingly expect "privacy by design" to mean actual technical safeguards, not policy documents.

  5. Are we prepared for the EU AI Act obligations taking effect in August 2026? The high-risk AI system requirements include documenting AI usage, implementing safeguards against data leakage, and conducting risk assessments. The 2 August 2026 enforcement date is approximately four months away.

The Bottom Line

Every prompt containing personal data is a data transfer. This is not an aggressive interpretation of the law -- it is the position of the European Data Protection Board, confirmed by enforcement actions in Italy, the Netherlands, and beyond.

The question is not whether your employees are using ChatGPT. The question is whether personal data leaves your organization when they do. Companies that answer this question with technical measures rather than with hope that policies will be followed are in a stronger position -- with regulators, with clients, and with their own risk management.

If you want to see how on-device PII anonymization works in practice, the Rehydra browser extension is free and open source. Install it in 30 seconds and try it with your next ChatGPT prompt.

Install Rehydra for Chrome | Install Rehydra for Firefox

Frequently Asked Questions

Is using ChatGPT at work a GDPR violation?

Not automatically. ChatGPT usage is GDPR-compliant when no personal data is entered in prompts, when a valid legal basis exists for any data processing that does occur, and when the requirements for third-country transfers are met. In practice, most companies fail on at least one of these conditions -- particularly the first one, since preventing personal data from appearing in prompts requires either extraordinary discipline or technical enforcement.

Does a Data Processing Agreement with OpenAI solve the problem?

A DPA is necessary but not sufficient. First, OpenAI only offers a DPA for its paid products (ChatGPT Team, Enterprise, API) -- not for the free tier or ChatGPT Plus used by most employees. Second, a DPA governs the controller-processor relationship under Article 28 GDPR but does not resolve the third-country transfer issue. That requires either Standard Contractual Clauses or reliance on the EU-US Data Privacy Framework -- both of which carry the uncertainties described above.

What does OpenAI do with data entered into ChatGPT?

For the free version and ChatGPT Plus (without opt-out), OpenAI uses prompts and uploads to further train the model. For ChatGPT Team, Enterprise, and the API with appropriate configuration, OpenAI states that it does not use inputs for training. Regardless of training usage, all prompts are transmitted to and processed on OpenAI's servers -- and it is this transmission that constitutes the data processing event under GDPR, independent of what happens to the data afterward.

How does on-device anonymization work technically?

A Named Entity Recognition model runs locally in the user's browser as an ONNX model via WebAssembly (WASM), with no server communication. The model detects personal data in the prompt text and replaces it with typed placeholders before the prompt is sent. The mapping table between placeholders and original values is stored encrypted on the local device. When the AI response arrives, placeholders are automatically swapped back to the original values. The result: personal data never leaves the device, while the AI provider receives only anonymized text.
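
For completeness, the swap-back direction is a plain substitution over the response text. This sketch reuses the illustrative mapping shape from the earlier `anonymize()` sketch; note that longer placeholders must be replaced first so that a match on "PERSON-1" does not corrupt "PERSON-10".

```typescript
// Sketch: restoring original values in the AI response, using the
// illustrative placeholder -> original mapping produced by anonymize().

function rehydrate(response: string, mapping: Map<string, string>): string {
  // Replace longer placeholders first so "PERSON-10" is not
  // partially rewritten by a match on "PERSON-1".
  const placeholders = [...mapping.keys()].sort((a, b) => b.length - a.length);

  let text = response;
  for (const placeholder of placeholders) {
    text = text.split(placeholder).join(mapping.get(placeholder)!);
  }
  return text;
}
```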

This article is for general informational purposes and does not constitute legal advice. For a data protection assessment of AI usage in your organization, consult your Data Protection Officer or a law firm specializing in data protection law.
