Google’s rush to win AI race has led to ETHICAL LAPSES
By Ramon Tomey // Apr 24, 2023

Google's rush to win the artificial intelligence (AI) race has led to ethical lapses on the company's part. Eighteen current and former workers at the Mountain View, California-based tech firm, alongside internal documents reviewed by Bloomberg, have attested to this.

According to the employees, management asked them to test Google's AI chatbot Bard in March, shortly before its public launch. One worker said the chatbot was "a pathological liar," while another dubbed it "cringe-worthy." A third employee posted his thoughts about Bard in an internal message group in February: "Bard is worse than useless; please do not launch."

The note was viewed by nearly 7,000 people, many of whom agreed that the chatbot's answers were contradictory or even egregiously wrong on simple factual queries. One employee recounted how Bard's advice on how to land a plane would lead to a crash. Another said Bard gave advice "which would likely result in serious injury or death" when asked about scuba diving.

The 18 current and former employees said the terminations of AI researchers Timnit Gebru in December 2020 and Margaret Mitchell in February 2021 were a turning point for Google's foray into AI, which culminated in Bard. Gebru and Mitchell, who co-led Google's Ethical AI team, left the search engine giant after a dispute over fairness in the company's AI research. Several other researchers subsequently left Google to join competitors, including Samy Bengio, who oversaw the work of Gebru and Mitchell.

Since then, employees have found it difficult to work on ethical AI at Google. One former Google staffer who asked to work on fairness in machine learning said they were routinely discouraged, to the point that it affected their performance review. Their managers protested that the fairness work was getting in the way of their "real work." (Related: Ex-Google engineer warns Microsoft's AI-powered Bing chatbot could be sentient.)


Google unit responsible for ethical AI disempowered and demoralized

Google reiterated that responsible AI remains a top priority at the company. Company spokesman Brian Gabriel told Bloomberg: "We are continuing to invest in the teams that work on applying our AI principles to our technology."

The 18 current and former employees begged to differ, however. They said the Google unit that had been working on ethical AI, first established in 2018, is now disempowered and demoralized. Back in 2021, the Big Tech firm had pledged to double the headcount of the ethical AI team and pour more resources into it.

Now, staffers responsible for the safety and ethical implications of Bard and other new products have been told not to impede any of the generative AI tools in development. At least three members of the team were also let go in the January mass layoffs.

Google's pivot toward Bard at the cost of potential ethical issues was mainly spurred by the release of rival chatbot ChatGPT. Microsoft invested in OpenAI, ChatGPT's developer, with a view to incorporating the chatbot into its products, notably Microsoft Office. Google is now racing to beat Microsoft by incorporating Bard into Google Workspace.

A minority of employees believe that Google has conducted sufficient safety checks on its new generative AI products and that Bard is safer than competing chatbots. But now that Google has made the release of generative AI products its No. 1 priority, ethics employees said it has become futile to speak up.

Signal Foundation President Meredith Whittaker, herself a former manager at Google, lamented how "AI ethics has taken a back seat." She warned: "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work."

Visit EvilGoogle.news for more stories about Google's Bard AI chatbot.

Listen to Elon Musk's warning about how Bard and other AI chatbots will be far more dangerous than nuclear weapons.

This video is from DaKey2Eternity channel on Brighteon.com.

More related stories:

Google shares lose $100 billion after new AI chatbot gives an incorrect answer in demo.

Google unveils new AI medicine robots: Are HUMAN doctors about to become obsolete?

Google CEO admits he DOESN'T UNDERSTAND how his company's AI chatbot Bard works.

Sources include:

Finance.Yahoo.com

Brighteon.com


