Will artificial intelligence destroy humanity?

Malevolent robots were once nothing more than a tired sci-fi subject, but they’re inching closer to reality, News.com.au (Australia) reported.

Recently, a South Korean university’s move to create “killer robots” for the arms industry sparked alarm among artificial intelligence and robotics researchers across the world.

More than 50 researchers from 30 countries vowed to boycott all contact with the Korea Advanced Institute of Science and Technology, following reports it aimed to “develop artificial intelligence technologies to be applied to military weapons, joining the global competition to develop autonomous arms” — or robots capable of murder.

Many artificial intelligence experts have hypothesised that the technology could one day destroy mankind.

But there’s one chilling thought experiment that takes the idea to its extreme — and for a while, it sent internet users into a meltdown.

‘THE MOST TERRIFYING THOUGHT EXPERIMENT EVER’
Years ago, a terrifying thought experiment called ‘Roko’s Basilisk’ argued an all-powerful robot from the future would punish everyone who didn’t help bring it into existence.

Once you know about the prospect of this robot, it will torment you as punishment when it is eventually created.

Not even death can spare you from its inflicted eternal torture, because the superhuman Basilisk is intelligent enough to resurrect a digital copy of you — one that is indistinguishable from you, thinking and feeling exactly as you do — and torture that.

The theory argues that all it takes to make you a target is knowing about the robot — and not doing anything to help bring it into existence.

It’s kind of like an elaborate, more twisted version of “The Game” — a popular mind game built on ironic processing, in which to think about “The Game” is to lose it.

The idea originated several years ago on LessWrong, an online forum where users discuss artificial intelligence and techno-futurism.

The theory of Roko’s Basilisk ties together three concepts. Here’s a basic summary:

THE OBJECT: In the future, technology will create an advanced computer program that can automatically carry out tasks to create a perfect world.

THE OBSTACLE: This is not actually possible, because no matter how good something is, it can always be a tiny bit better. It’s like trying to count to infinity. The number can always go slightly higher.

THE BASILISK: Based on the impossibilities in making the world a perfect place, the “Basilisk” could seek to punish and destroy all the people who didn’t help it come about, and thus didn’t help contribute to a perfect world. They are imperfect, therefore, in the eyes of the machine, they cannot be part of the perfect world it is seeking to create.

This may sound insane. But Eliezer Yudkowsky, the American AI researcher who founded LessWrong, had a hysterical reaction to the whole thing.

“Listen to me very closely, you idiot,” he wrote in response to the original poster. “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

“You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.”

Yudkowsky went on to ban the post, saying it would “give people nightmares” and inspire mental breakdowns. He deleted his own post too, but it’s been copied to various threads online.

The idea of replicated digital copies of ourselves, completely indistinguishable from our real selves, has been explored in popular culture for decades.

A prominent recent example is the Netflix series Black Mirror, which has devoted entire episodes to the concept of eternally torturing a digital “cookie” form of a person.

If it makes you feel better, despite Yudkowsky’s furious reaction — which he later admitted was “foolish” — the experiment has largely been derided in recent years.

Of course, this doesn’t mean AI doesn’t pose a threat.

Elon Musk, who supports all sorts of AI research, has previously warned it’s highly likely that AI will be a threat to humanity.

Last year he went as far as to warn that competition for AI superiority would be the most likely cause of World War III.

In 2015, thousands of AI researchers and scientists signed an open letter warning that autonomous weapons would be “feasible within years”.

They expressed their concerns that such weapons could prompt an “AI arms race”.

The late Stephen Hawking — who was a signatory to the letter — warned in a 2015 Reddit AMA thread that AI has the potential to “destroy” its human creators.

Speaking at the Web Summit in Lisbon last year, he said: “AI could develop a will of its own, a will that is in conflict with ours and which could destroy us. In short, the rise of powerful AI will be either the best, or the worst thing ever to happen to humanity.”

Nick Bostrom, head of the University of Oxford’s Future of Humanity Institute, likewise claimed that we may have just 50 years to save ourselves from artificial intelligence.

He said the scramble to create super-intelligent minds could lead to mistakes with disastrous consequences.

To illustrate what might happen, Bostrom gave an example that’s disturbingly similar to Roko’s Basilisk, involving a machine that’s built to make paperclips.

He theorised that this machine may one day decide that humans are standing in the way of its mission, and destroy us all to enhance its own ability to churn out paperclips.

The academic said it was a “conservative assumption” that super-smart computers would be able to control “actuators”, that is, armies of robots to do their bidding.
