Amazon’s new generative AI assistant “Q” has “severe hallucinations” and leaks confidential data
By Cassie B. // Dec 08, 2023

Amazon’s fledgling generative AI assistant, Q, has been struggling with factual inaccuracies and privacy issues, according to leaked internal communications.

The chatbot was recently announced by Amazon’s cloud computing division and is aimed at businesses. A company blog post says it was built to help employees write emails, troubleshoot, code, conduct research and summarize reports, and that it will provide users with helpful answers relating only to content that “each user is permitted to access.”

It was promoted as a safer and more secure offering than ChatGPT. However, leaked documents show that it is not performing up to standards, experiencing “severe hallucinations” and leaking confidential data.

According to Platformer, which obtained the leaked documents, one incident was flagged as “sev 2.” This designation is reserved for events deemed serious enough to page Amazon engineers overnight and have them work through the weekend to correct them. The publication revealed that the tool leaked unreleased features and shared the locations of Amazon Web Services data centers.

One employee wrote in the company’s Slack channel that Q could provide advice that is so bad that it could “potentially induce cardiac incidents in Legal.”

An internal document referring to the wrong answers and hallucinations of the AI assistant noted: “Amazon Q can hallucinate and return harmful or inappropriate responses. For example, Amazon Q might return out of date security information that could put customer accounts at risk.”


These are very worrying problems for a chatbot that the company is gearing toward businesses, which will likely have data protection and compliance concerns. It also doesn’t bode well for the company in its quest to prove that it is not falling behind competitors in the AI sphere, like OpenAI and Microsoft.

Amazon has denied that Q leaked confidential information. A spokesperson for the company noted: “Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon. No security issue was identified as a result of that feedback.”

The company said it became interested in developing Q after many businesses banned AI assistants from workplace use over privacy and security concerns. Q was essentially built to serve as a more private and secure alternative, and these leaks indicate that the company is failing to meet that objective.

AI chatbots are prone to hallucinations

Q is far from the only generative AI chatbot to encounter major issues like hallucinations, the term for the tendency of AI models to present inaccurate information as fact. However, some experts argue this characterization is misleading: these language models are trained to provide plausible-sounding answers to user prompts rather than correct ones. As far as the models are concerned, any answer that sounds plausible is acceptable, whether it is factual or not.

Although some companies have taken steps to rein in these hallucinations, some computer scientists believe the problem simply cannot be solved.

When Google unveiled its ChatGPT competitor Bard, the chatbot gave a wrong answer to a question about the James Webb Space Telescope during a public demo. In another high-profile incident, the tech news site CNET had to issue corrections after an AI-written article gave readers highly inaccurate financial advice. On another occasion, a New York lawyer got into trouble after using ChatGPT for legal research and submitting a brief citing a series of cases the chatbot had invented.

There are so many ways that relying on this technology can go wrong, particularly when people use answers from chatbots to make decisions about their health, finances, who to vote for and other sensitive topics.

Sources for this article include:

Futurism.com

Platformer.news


