Supposedly “private” ChatGPT conversations LEAKED in Google Search
By Ava Grace // Aug 05, 2025

  • Private conversations from OpenAI's ChatGPT were exposed in Google search results due to a now-disabled "discoverable" feature, revealing sensitive topics like mental health struggles and abuse confessions.
  • An experimental opt-in setting allowed users to share chats publicly, but unclear warnings led many to unknowingly expose private conversations to web searches, with content remaining intact and searchable.
  • Thousands of intimate discussions covering abuse, addiction and workplace issues were leaked, highlighting risks of treating AI as a confidential therapist or advisor.
  • The company removed the feature and is working to scrub indexed chats, but critics argue the opt-in design was poorly communicated, leaving lasting privacy risks due to Google's caching.
  • The incident underscores the lack of guaranteed confidentiality in AI interactions, urging users to treat chatbots like public platforms and assume no privacy in unregulated digital spaces.

In an alarming breach of digital privacy, private conversations held with OpenAI's ChatGPT recently surfaced in Google search results.

Journalist and privacy advocate Luiza Jarovsky first revealed the leak. She wrote that sensitive discussions – ranging from mental health struggles to confessions of abuse – were accessible with a simple Google search. According to Jarovsky, the exposure occurred due to a now-disabled feature that allowed users to mark chats as "discoverable" – inadvertently making deeply personal exchanges searchable online.

When generating a shareable link, an unchecked box labeled "make this chat discoverable" appeared, warning that the exchange would appear in web searches. Many users, unaware of the implications, may have clicked the option without realizing their chats would be indexed by Google. Some likely assumed the setting was necessary to share links with friends, not grasping that their private thoughts would be exposed to the world. (Related: ChatGPT can figure out your personal data using simple conversations, warn researchers.)

OpenAI swiftly removed the feature after acknowledging the risk. Though it stripped identifying details from the leaked chats, the content itself remained intact – meaning raw, unfiltered discussions were suddenly available to anyone with an internet connection. The incident has nevertheless reignited debates over AI privacy, corporate responsibility and the dangers of trusting sensitive matters to algorithms.

The human cost of AI oversharing: A cautionary tale

The fallout was immediate. Searches revealed thousands of conversations, some containing intimate details about relationships, addiction and even workplace grievances.

One user sought advice on handling an abusive partner, while another confessed to past misconduct. These were not hypothetical musings; they were real people's vulnerabilities, now floating in the digital ether.

The incident underscores a growing trend. Individuals increasingly turn to AI for therapy-like support, career advice and personal dilemmas, often under the mistaken assumption of confidentiality.

Yet, as OpenAI CEO Sam Altman has previously warned, no legal framework guarantees privacy in AI interactions. Users are, in effect, trusting corporations with their secrets – a risky proposition in an era of rapid technological experimentation.

OpenAI Chief Information Security Officer Dane Stuckey confirmed the feature's removal, calling it a "short-lived experiment" that created too many risks. The company is now working with search engines to scrub indexed conversations, but the damage may already be done. Once data enters Google's cache, it can linger in archives, screenshots or third-party sites long after deletion.

Critics argue that OpenAI's opt-in design was insufficient. The warning text, "Anyone with the URL will be able to view your shared chat," did not clearly convey that chats could also appear in Google search results. For a platform used by millions, ambiguity in privacy settings is unacceptable.
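The indexing problem described above comes down to crawler directives: a web page that should stay out of search results normally carries a robots "noindex" meta tag, and a publicly linked page without one is fair game for Google's crawler. The sketch below, which is purely illustrative and makes no claim about OpenAI's actual page markup, shows how such a directive can be detected in an HTML document using only Python's standard library.

```python
from html.parser import HTMLParser


class NoindexDetector(HTMLParser):
    """Scan an HTML document for a robots meta tag containing 'noindex'."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # A tag like <meta name="robots" content="noindex"> tells
        # well-behaved crawlers not to list this page in search results.
        if attrs.get("name", "").lower() == "robots" and \
                "noindex" in attrs.get("content", "").lower():
            self.noindex = True


def has_noindex(html: str) -> bool:
    """Return True if the HTML asks search engines not to index it."""
    parser = NoindexDetector()
    parser.feed(html)
    return parser.noindex


# A page that asks crawlers to stay out:
print(has_noindex('<html><head><meta name="robots" content="noindex"></head></html>'))  # True
# A page with no such directive can be indexed and surfaced in search:
print(has_noindex('<html><head><title>Shared chat</title></head></html>'))  # False
```

The directive only works prospectively, which is consistent with the cleanup problem the article describes: once a page has been crawled and cached, adding "noindex" or deleting the page does not immediately remove existing copies from search results.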

This is not the first time AI tools have mishandled personal data. In 2023, an Amazon Alexa glitch sent private recordings to the wrong users. Earlier this year, Google Bard – the predecessor of the search engine giant's Gemini AI – faced scrutiny for retaining user inputs longer than advertised.

Each incident reinforces a sobering truth: Convenience often comes at the cost of control. For now, OpenAI advises users to audit their shared links via ChatGPT's settings and delete any unwanted exposures. In an unregulated digital landscape, privacy is never guaranteed.

Watch Brother Nathanael Kapner issue a stern warning against ChatGPT in this clip.

This video is from the jonastheprophet channel on Brighteon.com.

More related stories:

Italy bans ChatGPT over privacy concerns.

Google updated its privacy policy so it can use all your data to train AI.

UNESCO: Combination of neurotechnology and AI threatens mental privacy.

Sources include:

ReclaimTheNet.org

IndianExpress.com

PCWorld.com

Brighteon.com
