AI pioneer warns humanity’s remaining timeline is only a few more years thanks to the risk that emerging AI tech could destroy the human race
By Cassie B. // Feb 22, 2024

Pioneering artificial intelligence researcher Eliezer Yudkowsky has warned that humanity may only have a few years left as artificial intelligence grows increasingly sophisticated.

Speaking to the Guardian, he told writer Tom Lamont: “If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.”

Yudkowsky, who founded the Machine Intelligence Research Institute in California, is talking about the end of humanity as we know it. He said that the problem is that many people fail to realize just how unlikely humanity is to survive all this.

“We have a shred of a chance that humanity survives,” he cautioned.

Those are scary words coming from someone whom Sam Altman, CEO of ChatGPT creator OpenAI, has credited with getting him and many others interested in artificial general intelligence and with being “critical in the decision to start OpenAI.”

Last year, Yudkowsky wrote in an open letter in TIME that most experts in the field believe “that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”

He explained that there will come a point when AI no longer does what people want it to do and does not care at all about sentient life. Although he thinks that kind of caring could, at least in principle, one day be built into AI, no one currently knows how to do it. This leaves people fighting a hopeless battle, one he likens to “the 11th century trying to fight the 21st century.”


Yudkowsky said that an AI that is truly intelligent will not stay confined to computers, pointing out that it’s now possible to email DNA strings to labs and have them produce proteins for you, which means an AI that is solely on the internet at first could “build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

He has also explained that AI can “employ superbiology against you.”

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” he added.

Computer scientists have been warning since at least the 1960s that the goals of the machines we create will not necessarily align with our own.

Yudkowsky says the solution is to "shut it all down"

So how can we stop this? According to Yudkowsky, a great deal needs to be done. For example, he calls for an indefinite, global moratorium on new large training runs, with no exceptions for militaries or governments, although it is hard to imagine securing international cooperation on the matter from places like China.

He also thinks that large GPU clusters should be shut down. These are the big computer farms where the world’s most powerful AIs are trained and refined. Ceilings on the amount of computing power that can be used to train AI systems would also help, as long as they are revised downward in the future as training algorithms become more efficient.

Yudkowsky thinks that we should “be willing to destroy a rogue datacenter by airstrike.” He wrote that even nuclear exchange might be okay if it meant taking out AI, although he now says he would have used “more careful phrasing” on that particular point if he were to write the piece again.

Although some might accuse him of scaremongering or sensationalism, the biggest-ever survey of AI researchers, released last month, found that 16 percent of respondents believe their work will lead to the extinction of humankind.

Sources for this article include:

TheGuardian.com

Time.com
