Your AI Tools Are Watching You. Here Are the Settings To Change That.
The opt-outs exist. The companies just hope you won't go looking.
Have you checked your AI tools’ privacy settings lately?
Not the settings you chose when you signed up weeks or months ago. The settings that exist now, after the companies rewrote their policies, mostly without making headlines about it.
I went through them. What I found wasn’t surprising. It was worse than that: it was boring. Routine corporate behavior dressed up in privacy-policy language designed to be skipped. The default in almost every tool is the same: your conversations, your prompts, your images go to the company and stay there. They improve their products with your work.
You can stop it. But you have to know where to look.
This is where to look …
What These Tools Are Actually Doing
Every major AI tool you use has a data collection policy. Most of them changed those policies in 2025. Most of the changes moved in the same direction: from “we don’t use your data for training” to “we do, unless you tell us not to.”
Here’s what the tools artists are most likely using are actually doing right now.
ChatGPT (OpenAI)
By default, ChatGPT uses your conversations to train its models. OpenAI gives you a way out, a toggle in your account settings, but you have to find it. If you haven’t touched this setting, you are contributing your prompts, your questions, your project descriptions, and your image uploads to OpenAI’s next model whether you intended to or not.
There’s a catch even if you opt out: OpenAI retains your conversations for 30 days regardless, citing “safety and abuse monitoring.” Zero is not on the menu.
Claude (Anthropic)
In August 2025, Anthropic announced that Claude would begin using conversations for training. Before that, it didn’t. The new policy gave users a choice, opt in or opt out, but the framing mattered: if you didn’t respond to the policy change, your silence was treated as consent. Anthropic began collecting data in October 2025.
For Claude’s Free, Pro, and Max consumer plans:
If you opt out: 30-day retention, same as before.
If you stay opted in (or missed the window): five-year retention.
That’s not a typo. Five (5!!) years. Ugh!
Gemini (Google)
Gemini saves your conversations for 18 months by default. That’s adjustable down to three months or up to three years, but the 18-month default is where you start if you’ve never changed it. Google also notes that human reviewers may examine conversations as part of product improvement. This is a major reason I don’t use Gemini. You must make your own decision here.
There’s a newer feature worth knowing about: Temporary Chats. These disappear after 72 hours and are not used for training. If you’re working on something sensitive (a project pitch, client-facing creative work, anything you wouldn’t want Google to keep), Temporary Chat is the right mode.
Adobe / Lightroom / Firefly
This one matters most for photographers and visual artists, so it gets more space.
Adobe allows content analysis of files processed or stored through Creative Cloud and Document Cloud apps. The company says this analysis is used for product improvement, understanding how features are being used, what’s working, what isn’t. You can opt out at the account level.
What Adobe doesn’t advertise loudly: Firefly, Adobe’s generative AI image tool marketed as “ethically trained” and “commercially safe,” was partially trained on AI-generated images from competitors including Midjourney. This came out in a Bloomberg investigation in 2024. Adobe acknowledged that approximately 5% of Firefly’s training data came from AI-generated images produced by rival platforms.
For artists who chose Adobe precisely because of the ethical-training promise, that disclosure matters. Adobe’s exposure after the investigation appears to have influenced its emphasis on “commercially safe” model training and tighter integration within its own ecosystem. It looks like yet another giant company acting without permission and asking for forgiveness later.
The Settings: Here’s the Off Switch
This is the part that matters. The opt-outs are real. They work. They take less than five minutes per tool. Here’s exactly where to find them.
ChatGPT
Log in at chat.openai.com
Click your profile icon (top right corner)
Select Settings
Click Data Controls
Find “Improve the model for everyone”
Toggle it off
Done. New conversations from that point forward are not used for training. The setting applies to your entire account, on every device.
Note: This does not delete past conversations. If you want those gone, you’ll need to delete your chat history separately through the same Data Controls menu.
Claude
Desktop / Browser:
Click your name or profile icon
Select Settings
Click Privacy
Find “Help Improve Claude”
Toggle it off
Mobile:
Same path: your name → Settings → Privacy → Help Improve Claude → off.
Once the toggle is off, new conversations are not used for future model training. Already-stored conversations and anything already in training pipelines may still be used; Anthropic’s policy is explicit about that. You’re changing what happens going forward, not erasing the past.
Gemini
To stop data collection going forward:
Go to gemini.google.com and sign in
Click the Activity icon (clock with arrow, top of left sidebar)
Find Gemini Apps Activity
Toggle it off
To use conversations that disappear automatically:
Look for the dotted chat icon next to the New Chat button. This starts a Temporary Chat; conversations in this mode expire after 72 hours and are excluded from training data by default.
For sensitive creative work, Temporary Chat is worth making a habit.
Adobe Creative Cloud - Personal Account
To opt out of content analysis:
Go to account.adobe.com/privacy and sign in
Find the content analysis option (listed under machine learning / product improvement)
Deselect or toggle off the permission
Adobe’s opt-out does not affect:
Content you’ve submitted to Adobe Stock
Any content analysis you’ve already consented to
The Honest Part
I want to be straight with you, because there’s a version of this piece that isn’t.
Opting out of training is not the same as opting out of data collection. These companies still retain your conversations after you opt out: 30 days for OpenAI and Anthropic, and 72 hours for Google’s Temporary Chats (standard Gemini chats follow whatever retention period you set in your activity controls). The data passes through their servers regardless. The opt-outs tell them not to use your conversations to build future models. They don’t promise your data never touches their infrastructure.
If you want zero data collection, there’s only one answer: don’t send your data to their servers at all. Tools like Ollama, LM Studio, and Jan.ai run AI models locally on your machine, with no external connection unless you configure one. Nothing leaves your computer. No retention policy matters because there’s nothing to retain.
The trade-off is real. Local models are improving fast, but they are not ChatGPT. They require more setup. They require a machine with enough processing power to run them. For most artists, a local model is the wrong tool for everyday work; it’s a specialist choice for specific situations.
But for sensitive work, contract negotiations, client communications, business strategy, knowing the local option exists is worth something.
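To make “nothing leaves your computer” concrete, here is a minimal sketch of talking to a local model through Ollama’s REST API, which listens on localhost once Ollama is installed and running. This assumes you’ve already pulled a model; the model name `llama3.2` is just an example, not a recommendation.

```python
import json
import urllib.request

# Ollama's default local endpoint. The request never leaves your machine:
# it goes to a server running on localhost, not to a cloud provider.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama model and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage, once Ollama is running and a model is pulled (e.g. `ollama pull llama3.2`):
#   ask_local_model("llama3.2", "Summarize this client pitch in one sentence.")
```

There’s no retention setting to hunt for here, because there’s no company on the other end of the request.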
Midjourney, the exception worth naming
One tool I haven’t given you opt-out steps for: Midjourney. That’s not an oversight.
Midjourney has no opt-out for prompt training. When you type a prompt into Midjourney, that prompt goes into their training pipeline. There is no setting to change that. If you use Midjourney, your prompts train their model. That’s the product’s terms of service.
Midjourney earned a privacy rating of D+ (38 out of 100) from Common Sense Privacy Watchdog. I’m citing that number so you know it came from somewhere, not because I think grades tell the whole story. The story is: if prompt privacy matters to you, Midjourney is the wrong tool.
Where I Landed
I use AI tools every day: Claude, Copilot, Brave Leo, NotebookLM. I’ve been a Gmail user for over 20 years. I have written about reducing Google surveillance before and have locked down my personal account as tight as a snare drum. I am fully aware that using Google’s AI tools (and others) exposes me to their surveillance. My personal process is to minimize my footprint when using online tools as much as possible.
The only way we can eliminate surveillance of our activity online is to reduce our own online activity. Therefore, when I need an AI tool, my first choice is to find one that runs locally on my Mac or Windows 11 PC. I have sufficient expertise and time to tinker under the hood of my systems to make sure things work safely and securely. If you have similar expertise, this is the path forward. If not, stay tuned for more guides like this to help you in the future.
Which of these tools surprised you most and have you found a setting I missed? Tell me in the comments.
And if you want this kind of analysis in your inbox every two weeks — cybersecurity, AI tools, privacy, and the tech industry’s fine print translated into plain English for creative professionals — hit the Subscribe button.
Keep those online Nosey Neighbors guessing about what we’re doing, because it’s none of their beeswax!



