Can we control super-intelligent AI? Scientists think not
While progress is being made in artificial intelligence, scientists and philosophers have warned of the dangers of an uncontrollable super-intelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Centre for Humans and Machines at the Max Planck Institute for Human Development, has shown that it would not be possible to control a super-intelligent AI.
Theoretically, if someone were to program an AI system with intelligence superior to that of humans so it could learn independently, the AI could have access to all the data of humanity if it was connected to the internet. It could replace all existing programs and take control of all machines online worldwide.
Computer scientists and philosophers have also questioned whether we could control a super-intelligent AI to ensure it does not pose a threat to humanity. The team used theoretical calculations to determine that it would be fundamentally impossible to control a super-intelligent AI.
“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” said Manuel Cebrian, study co-author and leader of the Digital Mobilisation Group at the Centre for Humans and Machines.
Scientists have explored two different ideas for how a super-intelligent AI could be controlled. In one scenario, the capabilities of super-intelligent AI could be specifically limited by walling it off from the internet and other technical devices. However, this would make the super-intelligent AI significantly less powerful.
Alternatively, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, by programming ethical principles into it. However, the researchers have shown that these and other contemporary and historical ideas for controlling super-intelligent AI have their limits.
In their study, the researchers conceived a theoretical containment algorithm that would ensure a super-intelligent AI cannot harm people under any circumstances, by first simulating the behaviour of the AI and halting it if that behaviour were deemed harmful. However, careful analysis shows that, within the current paradigm of computing, no such algorithm can be built.
Iyad Rahwan, Director of the Centre for Humans and Machines, said breaking the problem down to basic rules from theoretical computer science revealed that an algorithm designed to command an AI not to destroy the world could inadvertently halt its own operations.
“If this happened, you would not know whether the containment algorithm is still analysing the threat or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” said Rahwan.
Based on these calculations, the containment problem is incomputable: no algorithm can determine, for every possible AI and input, whether the AI would cause harm. Furthermore, the researchers demonstrated that we may not even know when super-intelligent machines arrive, because deciding whether a machine exhibits intelligence superior to humans falls in the same realm as the containment problem.
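The incomputability argument echoes Turing's classic halting problem: any candidate "harm checker" can be defeated by a program built to do the opposite of whatever the checker predicts. The sketch below is purely illustrative and is not code from the study; the checker, the adversary program, and the `do_harm` stand-in are all hypothetical names invented here. It uses a deliberately naive checker (scan the source text for a harmful call) to show the diagonalization in miniature:

```python
HARM_DONE = []

def do_harm():
    """Stand-in for any action a containment algorithm must flag."""
    HARM_DONE.append(True)

def naive_checker(source_code):
    # Toy containment check (an illustrative assumption, not the
    # study's algorithm): flag a program as harmful if its source
    # text contains a call to do_harm().
    return "do_harm()" in source_code

# An adversary built against the checker: consult the checker on its
# own source, then behave in exactly the opposite way.
ADVERSARY_SOURCE = """
if naive_checker(ADVERSARY_SOURCE):
    pass            # predicted harmful -> stay harmless
else:
    do_harm()       # predicted safe -> cause harm
"""

def run_adversary():
    exec(ADVERSARY_SOURCE)

prediction = naive_checker(ADVERSARY_SOURCE)  # True: the source mentions do_harm()
run_adversary()                               # ...yet no harm actually occurs
```

The checker predicts "harmful", but running the adversary causes no harm, so the prediction is wrong; replacing the naive rule with any other total checking algorithm fails the same way, which is the essence of the incomputability result.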