Beginner · 10 min read · Feb 24, 2026

Privacy & Security

Platform-specific privacy walkthroughs



The Reality in Plain Language {#reality-in-plain-language}

Anything you type into an AI chat could be read by humans or used to train future models. The companies building these tools are not charities. They collect data to improve their products, and that includes your conversations.

Does this mean you should never use AI? No. It means you should use it with your eyes open. Think of it like email. You wouldn’t email your password to a stranger, but you also wouldn’t refuse to use email entirely. The question is what’s appropriate to share, not whether to use the tool at all.

The rest of this section gives you specific, actionable guidance. Platform by platform. Setting by setting. What not to share and why. For detailed platform comparisons, see the Platform Breakdown.


Quick Reference: Default Data Policies {#default-data-policies}

| Platform | Default Training on Conversations | Opt-Out Available | Free vs Paid Difference |
| --- | --- | --- | --- |
| ChatGPT | Yes for free users; no for Plus/Pro/Team | Yes (Plus and above) | Paid tiers get zero data retention by default |
| Claude | Yes for free users; no for Pro/Team/Enterprise | Yes (Pro and above) | Paid tiers get zero data retention by default |
| Google Gemini | Yes for all users | Partial (depends on workspace type) | Workspace admins have more control |

Here’s what this means in practice: on a paid tier, your conversations are much more private by default. On a free tier, you need to be more careful. Either way, you should know what each setting actually does. See Cost Management & ROI for help evaluating whether a paid tier is worth the cost.


ChatGPT/OpenAI: Platform Walkthrough {#chatgpt-walkthrough}

Open the ChatGPT interface. Look for your name or profile picture in the bottom-left corner. Click it, then select Settings.

Data Controls Tab

“Improve the model” (Chat History & Training)

This is the big one. When enabled, OpenAI uses your conversations to train future models. When disabled, your conversations are not used for training. This setting is off by default for Plus, Pro, and Team subscribers. It is on by default for free users.

What this actually means: if you leave this on, a human reviewer might read your conversation. They’re supposed to redact sensitive information, but you shouldn’t rely on that. If you’re sharing anything sensitive, turn this off.

“Chat History”

This controls whether your conversations are saved in your account. If disabled, conversations are not saved and won’t appear in your history. They may still be used for training (that’s a separate setting).

Why you might disable this: if you’re working on something sensitive and don’t want a permanent record. Why you might keep it on: history is how AI maintains context across sessions and how you find useful past conversations.

“Share conversations”

This controls whether you can share individual conversations via a link. When you share a conversation, anyone with the link can view it. This is separate from the training setting: sharing doesn’t mean OpenAI trains on it, and training doesn’t mean your conversations are publicly shared.

Recommendation: leave this off unless you actively need to share something. It’s too easy to accidentally create a public link.

Account Tab

“Data Controls”

This takes you to a broader privacy dashboard where you can:

  • Download your data (all your conversations in a zip file)
  • Delete your account and all associated data
  • Manage whether your conversations are used for training

The export feature is worth knowing about if you ever want to leave the platform. The delete feature is nuclear. It erases everything.

What You Can’t Control

OpenAI retains some data for security and safety purposes regardless of your settings. This includes abuse detection, illegal content monitoring, and compliance with legal requests. They’re not secretive about this. It’s in their privacy policy. The difference is in whether your conversations are used to improve models, not whether they exist on OpenAI’s servers at all.

ChatGPT Plus ($20/month):

  • Zero data retention by default. Your conversations are not used for training unless you explicitly opt in.

ChatGPT Team/Enterprise:

  • Even stronger protections. Team admins control all privacy settings.
  • Data is not used to train OpenAI’s models, period.
  • More granular controls over what team members can share.

The honest assessment: if you’re using AI for anything work-related or sensitive, the paid tier’s privacy default is worth it.


Claude/Anthropic: Platform Walkthrough {#claude-walkthrough}

Open Claude. Click your initials or profile picture in the top-right, then select Account Settings.

Data Usage Tab

“Use my data to improve Anthropic’s models”

When enabled, Anthropic uses your conversations to improve Claude. When disabled, your conversations are not used for training. This setting is off by default for Pro, Team, and Enterprise users. It is on by default for free users.

What this actually means: similar to OpenAI. If this is on, your conversations could be reviewed by humans as part of model training. If it’s off, they are not used for training purposes.

“Data retention”

This controls how long Anthropic keeps your conversations. For Pro users, conversations are retained for 30 days by default and then deleted. Enterprise users can negotiate different terms. Free users should assume longer retention.

Why this matters: shorter retention means less risk. If someone does gain unauthorized access to your account or there’s a data breach, there’s less to steal.

“Data deletion”

You can manually delete specific conversations from your chat history. This removes them from your account immediately. Note that Anthropic may still have copies on their servers for a short period for security and compliance purposes.

Account Privacy Tab

“Profile information”

This is standard account data: your name, email address, usage metrics. Not your actual conversations. This data is always retained for account management purposes.

“Integrations and connections”

If you’ve connected Claude to other services (Google Drive, Slack, etc.), this section shows what’s connected. You can revoke access here. See AI Already In Your Tools for more on AI integrations with your existing tools. Important to review periodically, especially if you use Claude for work.

What You Can’t Control

As with OpenAI, Anthropic retains some data for safety and security regardless of your settings. They monitor for abuse, illegal content, and safety violations. They also comply with legal requests. See Anthropic’s privacy policy for details. Privacy controls are about model training, not total anonymity.

Claude Pro ($20/month):

  • No training on your conversations by default.
  • 30-day retention, then deletion (vs indefinite for free users).

Claude Team/Enterprise:

  • Stronger data protection agreements.
  • No training on team data.
  • Admin controls over what team members share.
  • Options for data residency (where your data is stored geographically).

The honest assessment: Anthropic’s privacy reputation is a major selling point. Its policies are more explicit than some competitors’ about not training on user data by default. If you care about privacy and are choosing between platforms, this is a point in Claude’s favor.


Google Gemini: Platform Walkthrough {#gemini-walkthrough}

This one is more complicated because Gemini is tied to your Google account, which means your privacy settings are spread across multiple places.

Gemini-Specific Settings

Open Gemini. Click the gear icon in the top-right, then select Settings.

“Gemini Apps Activity”

This is your conversation history. When enabled, your conversations are saved to your Google Account and used to improve Google’s products. When disabled, conversations are not saved.

Important nuance: even with this off, Google may retain conversations for up to 72 hours for safety purposes. After that, they’re deleted.

“Activity controls”

This link takes you to your broader Google Activity settings, which affect everything in your Google ecosystem, not just Gemini.

Google Account Activity Settings

Go to myaccount.google.com. Look for Data & privacy in the left sidebar. This affects Gemini plus everything else Google does.

“Web & App Activity”

This includes your Gemini conversations plus your searches, Google Maps usage, YouTube history, and more. Turning this off improves privacy but breaks personalization across Google products.

“Include Chrome history and activity from sites”

This gives Google even more data to work with. If you care about privacy, turn this off.

“YouTube History”

Unrelated to Gemini, but worth understanding as part of your overall Google privacy picture.

Workspace vs Personal Accounts

If you’re using Gemini through a Google Workspace account (for work or school), your administrator controls privacy settings. You may not be able to change them yourself.

Workspace Starter/Standard:

  • Admin controls whether conversations are used for training.
  • Admin controls data retention.
  • Generally more privacy-aware than personal accounts.

Workspace Enterprise:

  • Strongest protections.
  • Google does not use Workspace data to train consumer-facing models.
  • More granular admin controls.

The honest assessment: Google’s privacy situation is more complex because your data flows between products. If you’re already deep in the Google ecosystem, turning things off breaks functionality. If you care about privacy and are starting fresh, consider whether you want Gemini to be your entry point.


What NOT to Share (And Why) {#what-not-to-share}

Never Share These, Ever {#never-share}

Passwords, API keys, or login credentials
Why: AI systems log conversations. If your password is in a conversation log, anyone who gets access to that log can access your account. This includes both the AI company and anyone who might breach their systems.

Social Security numbers, passport numbers, dates of birth, government IDs
Why: These are keys to identity theft. Once exposed, you can’t “reset” them the way you can a password. The risk lasts forever.

Bank account or credit card numbers
Why: Financial fraud. Plain and simple.

Full medical history or diagnostic information
Why: Protected health information creates legal liability for the companies handling it. More importantly, you don’t know who might review conversations. Medical privacy is personal and serious.

Confidential client or work data
Why: You may be violating contracts, NDAs, or legal obligations. Even if you think the data is anonymized enough, don’t risk it.

Personal information about children
Why: Children’s privacy is legally protected in most jurisdictions. You don’t have the right to share this information, even about your own kids, in many contexts.
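Several of the items above follow recognizable formats, which means a lightweight pre-send check can catch an accidental paste. Here is a minimal sketch in Python; the `flag_sensitive` helper and its patterns are illustrative assumptions, and real data-loss-prevention scanners use far larger rule sets:

```python
import re

# Illustrative patterns for data that should never appear in a prompt.
# These are assumptions for demonstration, not a complete rule set.
RISKY_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible API key": re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return a warning label for anything in the prompt that looks like a secret."""
    return [label for label, pattern in RISKY_PATTERNS.items()
            if pattern.search(prompt)]

warnings = flag_sensitive("My SSN is 123-45-6789, password: hunter2")
print(warnings)  # flags the SSN-shaped number and the password
```

A check like this is a seatbelt, not a guarantee: it matches formats, not context, so the habit of reading a prompt before sending it still matters.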

Be Careful With These

Real names and contact information
Why: It depends on context. “Help me write an email to Sarah Jenkins, the VP of Marketing at Acme Corp” is probably fine. “Help me find Sarah Jenkins’s home address” is not. Use your judgment and anonymize when you can.

Proprietary business information
Why: If it would matter to your company if competitors had it, don’t put it in an AI chat. Trade secrets, unreleased products, financials, strategic plans. These are not appropriate for external AI tools.

Legal documents or proceedings
Why: Attorney-client privilege is a real legal concept. Putting confidential legal communications into a third-party system may undermine that privilege. If you’re unsure, ask an actual lawyer.

How to Anonymize

Replace names with roles or labels
Instead of: “Should I fire John from sales?”
Try: “Should I terminate an underperforming sales employee who has been with the company 18 months?”

Use placeholder numbers
Instead of: “My company made $4.2 million last year”
Try: “My company made revenue in the low seven figures last year”

Generalize specifics
Instead of: “We’re launching Project Manticore on March 15th”
Try: “We’re launching a new product in Q1”

The AI doesn’t need the real specifics to give you useful guidance. It understands patterns and concepts.
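These substitutions are mechanical enough to script. As a minimal sketch (the `anonymize` helper, its patterns, and its replacements are illustrative assumptions, not a complete scrubber), a few regular expressions can catch code names, dollar figures, and specific dates before a prompt leaves your machine:

```python
import re

def anonymize(prompt: str) -> str:
    """Rough first-pass scrub before a prompt leaves your machine.
    The patterns below are illustrative, not exhaustive."""
    # Internal code names like "Project Manticore"
    prompt = re.sub(r"\bProject\s+[A-Z]\w+", "an unannounced project", prompt)
    # Dollar figures like "$4.2 million" or "$1,250"
    prompt = re.sub(r"\$[\d,.]+(?:\s*(?:million|billion|thousand))?",
                    "a sum in an undisclosed range", prompt)
    # Specific dates like "March 15th"
    months = ("January|February|March|April|May|June|July|"
              "August|September|October|November|December")
    prompt = re.sub(rf"\b(?:{months})\s+\d{{1,2}}(?:st|nd|rd|th)?\b",
                    "an unspecified date", prompt)
    return prompt

print(anonymize("We're launching Project Manticore on March 15th; "
                "we made $4.2 million last year."))
```

Anything a regex can’t reliably find, people’s names especially, still needs a manual read before you hit send.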


Enterprise and Work Considerations {#enterprise-considerations}

If you’re using AI for work, you have more to think about than your personal privacy.

The Company Policy Question

Your company may already have rules about AI usage. If they don’t, they will soon. For help evaluating AI tools for work, see How to Think About AI Tools. Common restrictions include:

  • No inputting confidential or proprietary information
  • Approved tools only (your company may have enterprise agreements)
  • Required disclosure when AI is used for client deliverables

If you’re not sure, ask. “Can I use ChatGPT for work?” is a reasonable question. Using it without asking and realizing too late that you violated a policy is not.

Enterprise vs Consumer Tiers

Most major AI platforms offer enterprise options with stronger privacy protections:

ChatGPT Team/Enterprise:

  • No training on your data
  • Admin controls and audit logs
  • Compliance certifications (SOC 2, GDPR, etc.)
  • Higher price, but appropriate for business use

Claude Team/Enterprise:

  • Similar protections
  • Data residency options
  • Strong contractual guarantees

Google Workspace with Gemini:

  • Existing Workspace privacy terms apply
  • No training on Workspace data
  • Admin controls over usage

The honest assessment: if your company allows AI use, they should be paying for an enterprise tier. Using consumer tools for business data is asking for trouble.

The Legal Gray Area

AI-generated content raises legal questions that are still being worked out:

  • Copyright: who owns AI-generated content?
  • Disclosure: do you have to tell clients when you use AI?
  • Liability: who is responsible if AI gives bad advice?

Your company may have legal guidance on this. If you’re freelancing or running your own business, stay informed. The rules are evolving.


Practical Habits Worth Building

Treat AI Chats Like Email

You wouldn’t email anything to a stranger that you wouldn’t want published. Treat AI conversations the same way. Assume that anything you type could be read by someone else someday. Not because the companies are malicious, but because data breaches happen, mistakes happen, and you should default to caution.

Delete What You No Longer Need

Periodically delete conversations you don’t need. Most platforms let you do this manually. Get in the habit of cleaning up after particularly sensitive sessions. Yes, the data may still exist on their servers for a short period. But limiting your conversation history is still better than keeping everything forever.

Use the Paid Tier for Anything Important

The privacy defaults on paid tiers are significantly better. If you’re using AI for work, for anything sensitive, or simply enough that you care about privacy, pay for it. The $20/month is cheap compared to the risk of a serious privacy breach.

Separate Personal and Work

Don’t use the same account for personal experiments and work tasks. If you’re using AI seriously for your job, create a separate account or use your company’s enterprise account. This limits cross-contamination and makes it easier to manage privacy settings appropriately.

Periodically Review Your Settings

Privacy policies and default settings change. Schedule a time every few months to review your settings. Make sure training is still disabled if that’s what you want. Check what’s connected to your account. It takes ten minutes and is worth it.


The Bottom Line {#bottom-line}

AI tools are incredibly useful, but they are not private by default. The companies building them need data to improve their products, and that data comes from your conversations.

This doesn’t mean you shouldn’t use them. It means you should use them with intention:

  • Know what you’re sharing and why
  • Understand your platform’s privacy settings
  • Pay for better privacy defaults if you can
  • Never share anything that would cause real harm if exposed

The people who get in trouble with AI privacy are the ones who treat it like a private conversation with a friend. It’s not. It’s a conversation with a company, and that company has business reasons to pay attention to what you say.

Use the tools. Use them heavily. Just know what you’re doing.