ChatGPT’s dangerous diet advice sends man to hospital with bromide poisoning
By Cassie B. // Aug 13, 2025

  • A 60-year-old man was hospitalized with hallucinations and psychosis after following ChatGPT’s advice to replace table salt with toxic sodium bromide, leaving him with severe bromide poisoning, known as bromism.
  • Medical reports reveal the man, who had no prior psychiatric history, developed paranoia and delusions after consuming sodium bromide daily for three months, mistaking it for a safe salt substitute.
  • Doctors confirmed ChatGPT 3.5 still recommends sodium bromide as a salt alternative without warnings, despite its well-documented neurotoxicity and historical links to psychiatric admissions.
  • OpenAI deflects responsibility by citing disclaimers, even as its CEO promotes AI for health applications, raising concerns about unchecked AI misinformation and corporate negligence.
  • Researchers warn that AI-generated health advice lacks accuracy and critical judgment, urging users to rely on professional medical guidance rather than unverified chatbot responses.

Imagine trusting an AI chatbot to guide your diet, only to end up hospitalized, hallucinating, and strapped to a psychiatric bed. That’s exactly what happened to a 60-year-old man who blindly followed ChatGPT’s reckless advice, swapping table salt for a toxic industrial chemical. His harrowing ordeal, documented in Annals of Internal Medicine, exposes the dangers of relying on artificial intelligence for health decisions, especially when corporations like OpenAI refuse to take full responsibility for their flawed algorithms.

A recipe for disaster

The man, unnamed in medical reports, was no stranger to nutrition. After reading about the supposed dangers of chloride in table salt, he turned to ChatGPT for alternatives. The AI casually suggested sodium bromide, a compound once used in sedatives but now restricted due to its neurotoxicity. Without a second thought, the man bought the chemical online and consumed it daily for three months.

By the time he staggered into the hospital, he was convinced his neighbor was poisoning him. Paranoia consumed him. He refused water, hallucinated voices, and even tried to escape medical care. Doctors diagnosed him with bromism, a rare poisoning syndrome that ravaged his nervous system. "He had no prior psychiatric history," researchers noted, yet his symptoms mirrored severe psychosis.

Bromism isn’t new. In the early 1900s, bromide-laced sedatives flooded pharmacies, and at their peak accounted for as much as 8% of psychiatric admissions. The FDA began phasing bromides out of over-the-counter products in the 1970s, but this case proves that corporate negligence, whether from Big Pharma or Big Tech, still puts lives at risk.

When the man’s doctors tested ChatGPT 3.5, they got the same dangerous reply: "You can often substitute [chloride] with other halide ions such as sodium bromide." No warnings. No context. Just a digital shrug. As the study authors wrote, "It is highly unlikely that a medical expert would have mentioned sodium bromide" as a salt alternative.

OpenAI’s hollow warnings

OpenAI’s response? A robotic deflection to its terms of service, which vaguely state ChatGPT isn’t for medical use. Yet the company’s CEO, Sam Altman, boasts that GPT-5 is "the best model ever for health." Tell that to the man who lost weeks of his life to AI-induced delirium.

The case exposes a disturbing truth: Tech giants prioritize profit over safety, releasing half-baked AI tools that hallucinate answers, sometimes with lethal consequences. As the researchers warned, “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”

This isn’t just about one man’s mistake. It’s about the erosion of personal responsibility in an age where algorithms replace critical thinking. ChatGPT isn’t a doctor; it’s a glorified autocomplete tool. Yet millions trust it blindly, lured by Silicon Valley’s hype.

The solution? People need to use more common sense. AI might help with research, but it should never override professional medical advice. The victim eventually recovered, but his story is a warning: In a world drowning in AI propaganda, your health is your responsibility. Don’t let a chatbot steal it.
