The idea of an artificial intelligence (AI) uprising may sound like the plot of a science-fiction film, but a new study finds that it is possible, and that we would not be able to stop it.

A team of international scientists designed a theoretical containment algorithm that ensures a super-intelligent system could not harm people under any circumstances, by simulating the AI and blocking it from wreaking havoc on humanity.

However, the analysis shows current algorithms do not have the ability to halt AI, because commanding the system not to destroy the world could inadvertently halt the containment algorithm’s own operations.

Iyad Rahwan, Director of the Center for Humans and Machines, said: ‘If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.’

‘In effect, this makes the containment algorithm unusable.’

AI has fascinated humans for years, as we are in awe of machines that control cars, compose symphonies or beat the world’s best chess players at their own game.

However, with great power comes great responsibility and scientists around the world are concerned about the dangers that may come with super-intelligent systems.

Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a super-intelligent AI.

The team used Alan Turing’s 1936 halting problem during the analysis, which asks whether or not a computer program will reach a conclusion and answer the problem – this being the halt – or loop forever looking for an answer.

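Turing’s argument is short enough to sketch in a few lines of code. The Python fragment below is a minimal illustration, not code from the study: the halts() oracle is a hypothetical, assumed to exist only so that a program can be built to contradict it.

```python
def halts(program, input_data):
    # Hypothetical oracle: True iff program(input_data) eventually halts.
    # Assumed to exist only for the sake of argument; Turing showed it cannot.
    ...

def paradox(program):
    # Ask the assumed oracle about the program applied to itself...
    if halts(program, program):
        while True:   # ...and loop forever if the oracle says 'it halts',
            pass
    return            # ...or halt immediately if it says 'it loops'.

# Asking halts(paradox, paradox) has no consistent answer: whichever value
# the oracle returns, paradox does the opposite, so no such oracle can exist.
```

This is the dilemma Rahwan describes above: a containment scheme built on simulation can end up waiting forever, with no way to tell ongoing analysis apart from deadlock.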

In their study, the team conceived a theoretical containment algorithm that ensures a super-intelligent AI cannot harm people under any circumstances, by simulating the behavior of the AI first and halting it if considered harmful. 

But careful analysis shows that in our current paradigm of computing, such an algorithm cannot be built.

Based on these calculations, the containment problem is incomputable: no single algorithm can determine whether an AI would produce harm to the world.
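
The same style of sketch shows why containment inherits this incomputability. The fragment below is an illustrative reduction with made-up names (is_harmful, do_harm), not the paper’s construction: if a perfect ‘is this AI harmful?’ checker existed, it could be used to solve the halting problem, which Turing proved impossible.

```python
def is_harmful(ai_program):
    # Hypothetical containment oracle: True iff running ai_program would
    # ever execute a harmful action. Assumed to exist for the argument.
    ...

def do_harm():
    # Stand-in for any action the containment algorithm must prevent.
    ...

def halting_checker(program, input_data):
    # Wrap an arbitrary program as an 'AI' that misbehaves only after
    # the wrapped program finishes running.
    def wrapped_ai():
        program(input_data)  # may or may not halt
        do_harm()            # reached only if program(input_data) halts
    # Deciding whether wrapped_ai is harmful decides whether
    # program(input_data) halts:
    return is_harmful(wrapped_ai)

# Because no halting_checker can exist (Turing, 1936), no general-purpose
# is_harmful oracle can exist either: the containment problem is incomputable.
```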

Researchers also demonstrated that we may not even know when super-intelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans is in the same realm as the containment problem.

WHY ARE PEOPLE SO WORRIED ABOUT AI?

It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.

SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.

He believes super-intelligent machines could use humans as pets.

Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.

They could steal jobs 

More than 60 percent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.

And 27 percent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.

A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.

As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.

They could ‘go rogue’ 

Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.

If experts don’t understand how AI algorithms function, they won’t be able to predict when those algorithms fail.

This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.

For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.

They could wipe out humanity 

Some people believe AI will wipe out humans completely.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.

Musk warned that AI poses more of a threat to humanity than North Korea.

‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.

‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’

Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.

He has argued that controls are necessary in order to prevent machines from advancing beyond human control.

This post first appeared on Dailymail.co.uk
