
Using ChatGPT in Companies: What Can Go In and What Must Never Go In? (GDPR Checklist)

Using ChatGPT in companies: this GDPR checklist shows which data is allowed, what is off-limits, and how to use AI in a controlled way.


Author: P-CATION editorial team

Tags: IT security & AI regulation · Governance and processes · GDPR compliance · Data classification · Handling employee and customer data · Prompt/input guidelines

[Image: Illustration of an AI chat split into a red risk side and a green approved side, with privacy and secure-use icons (AI-generated)]

Many teams already use ChatGPT. Often spontaneously, often without rules. That is exactly where the gray zone begins: the biggest risk is not “using AI” itself, but copying data into it without control because it is fast and helpful in the moment.

If you are asking, “What can we put into it?”, you are already ahead of many companies.

The real problem: ChatGPT is useful, but not automatically company-safe

ChatGPT is excellent for wording, structure, ideas, and summaries. That is exactly why it quickly becomes part of daily work in administration, sales, marketing, or leadership.

The problem does not start with the feature set. It starts with the input. As soon as sensitive information is pasted in, data leaves the company’s controlled environment. At that point, this is no longer just about productivity. It becomes a question of privacy, confidentiality, and compliance.

Quick rule of thumb

If you would not send something by email to an external address, it does not belong in a public AI chat either.

This rule is not a full legal assessment, but in day-to-day business it is surprisingly reliable.

GDPR checklist: What is usually okay

The following types of content are often lower risk as long as they cannot be traced back to real people, customers, or internal operations:

  • public information
  • anonymized examples such as “Customer A”, without order numbers, names, or locations
  • wording and structure help without original data, for example: “Write a polite reply for this situation”
  • general brainstorming, outlines, and idea drafts
  • summaries of information that is already public

GDPR checklist: What is not okay

The following content should not be copied into public AI chats:

  • personal data relating to customers, employees, or applicants
  • offers, prices, commercial terms, and contracts
  • internal documents, process details, meeting notes, or credentials
  • support cases with identifiable customer information
  • anything covered by NDA, confidentiality, or special secrecy obligations

In short: if content is business-critical, personal, or confidential, it does not belong in a freely used AI tool.
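The "not okay" list above can be partially automated. The sketch below is a minimal, hypothetical input guard that flags obviously sensitive patterns before text is pasted into a public AI chat. The pattern set and the example text are illustrative assumptions, not a complete data-loss-prevention solution; real deployments need proper data classification and review.

```python
import re

# Hypothetical heuristics for a prompt input guard. These rough
# patterns catch only the most obvious cases (email addresses,
# IBANs, phone numbers) and are no substitute for real DLP tooling.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone number": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def check_prompt(text: str) -> list[str]:
    """Return labels of sensitive patterns found in the text.

    A non-empty result means the text should not go into a
    public AI chat without review.
    """
    return [
        label
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

findings = check_prompt(
    "Please reply to max.mustermann@example.com about the offer"
)
if findings:
    print("Do not paste - contains:", ", ".join(findings))
```

Note the safe default: the guard only tells you when something definitely looks sensitive; an empty result is not a clearance, because context (NDAs, internal process details) cannot be detected by patterns alone.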

Three building blocks every company needs

If you want to regulate this properly, you almost always end up with the same three building blocks:

  1. Define data classes
    Use a simple traffic-light model: green, yellow, red.
  2. Write clear rules
    Employees need to know what is allowed, what is forbidden, and what must be checked before use.
  3. Assign responsibilities
    AI needs owners inside the company, not just users.

That turns gut feeling into a controllable standard.
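The traffic-light model from building block 1 can be written down as an explicit policy instead of living in people's heads. The sketch below shows one possible shape, with hypothetical content categories; every company fills in its own mapping and its own approval process for yellow.

```python
from enum import Enum

class DataClass(Enum):
    GREEN = "public or fully anonymized - may be used in AI chats"
    YELLOW = "internal - review required before use"
    RED = "personal, contractual, or NDA-covered - never in public AI tools"

# Hypothetical mapping of content types to traffic-light classes.
# The categories here are examples, not a recommended taxonomy.
POLICY = {
    "press release": DataClass.GREEN,
    "anonymized support example": DataClass.GREEN,
    "meeting notes": DataClass.YELLOW,
    "customer contract": DataClass.RED,
    "applicant CV": DataClass.RED,
}

def may_use(content_type: str) -> bool:
    """Green may go in; yellow needs an owner's sign-off; red never.

    Unknown content types default to RED, which errs on the
    safe side.
    """
    return POLICY.get(content_type, DataClass.RED) is DataClass.GREEN
```

The design choice worth copying even if you never write code for it: the default for anything unclassified is red, so the burden is on explicitly approving data, not on remembering to forbid it.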

What this means in practice

Instead of letting each person decide individually what data can be pasted into ChatGPT, create a framework that is transparent and enforceable:

  • Which tools are officially approved?
  • Which data types may be processed?
  • Who reviews exceptions?
  • When is human approval mandatory?

That lowers risk without killing the value of AI.

If you want AI to do more than just writing help

As soon as AI is supposed to support more than wording, for example in customer service, sales, or internal knowledge access, a public chat tool is usually no longer enough.

Then you need a solution that works in a controlled way:

  • only with approved information
  • with clear access rights
  • with traceable sources
  • with a privacy and compliance framework that is actually practical for SMEs

The solution: LIVOI as the official AI assistant for your company knowledge

LIVOI is designed to make AI usable in companies in a controlled way instead of pushing employees toward improvised point solutions.

1. Data stays in Germany

Data remains permanently in German data centers or, if required, directly on your own premises.

2. No model training with customer data

Data from LIVOI is not intended for training or improving AI models, including connected model providers.

3. GDPR processes and DPA under Art. 28 are possible

A DPA under Art. 28 GDPR can be provided on request. Processes are designed around common privacy requirements such as data subject rights, DPIAs, and incident handling.

4. Concrete safeguards instead of vague promises

Hosting in Germany with EU data residency, encryption via TLS 1.2+ and AES-256, and documented subprocessor governance create a solid compliance framework.

5. Compliance orientation with the EU AI Act in mind

LIVOI is designed so companies can use AI in a controlled, GDPR-oriented way and in line with the spirit of EU AI regulation: with clear responsibilities, traceable information flows, and structured usage in day-to-day work.

What this changes in everyday work

Instead of: employees copy customer data into an AI chat because they need a quick answer.
Better: LIVOI provides answers from approved sources, with clear rules and access rights.

Instead of: “I spend 15 minutes looking for the right file or piece of information.”
Better: ask questions and get answers based on your approved company knowledge.

If you use ChatGPT in your company, the most important question is not whether you use AI, but how controlled that usage is.

If you want a solution where data flows, hosting, and safeguards are clearly described and built for SMEs, take a look at LIVOI.

Get to know LIVOI now