Existential risks and the technological singularity

Posted on Dec 14 2012

David Keir, London

At VERTIC we are concerned with threats to society in the nuclear, biological, chemical and cyber domains. An article in the Sunday Times (25th Nov.) described a new project set up at Cambridge University to study the proposition that super-intelligent computers could become a threat to humanity. It says that: ‘The center for the study of existential risk — where “existential” implies a threat to humanity’s existence — is being co-launched by Lord Rees, the astronomer royal who is one of the world’s leading cosmologists. Its purpose is to study the “four greatest threats” to the human species: artificial intelligence, climate change, nuclear war and rogue biotechnology. Rees’s 2003 book, ‘Our Final Century’, had warned that the destructiveness of humanity meant our species could wipe itself out by 2100. He is launching the centre with Huw Price, the Bertrand Russell professor of philosophy at Cambridge, and Jaan Tallinn, co-founder of Skype.’
 
The opening paragraph on the centre’s website says: ‘Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in artificial intelligence, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.’ 
 
Interestingly, the Sunday Times article adds to this list the still-present threat of nuclear war.
 
In considering one of these threats, the artificial intelligence tipping point, the new Cambridge project appears to take the writings of Ray Kurzweil as one of its main jumping-off points. In ‘The Singularity Is Near’, Kurzweil predicts the ‘technological singularity’ occurring around the year 2045. What is envisaged is that at this point humans will build a system so powerful that it can design and produce systems of its own, making the continued existence of humanity a matter of indifference to a newly-dominant machine species.
 
Historical Perspective
In 1951, Alan Turing spoke of the theoretical possibility that computing machines could become even smarter than humans, saying that: '…once the machine thinking method has started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control…'
 
This idea has been reiterated and developed in the decades since then.
 
In 1965, I. J. Good first wrote of an 'intelligence explosion', suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The use of the term 'singularity' to describe a tipping point in technological advance, beyond which the outcome for society is indeterminable, is usually traced to the mathematician John von Neumann in the 1950s: he envisaged a point in future history beyond which humans could no longer be expected to remain the dominant and controlling species on earth. His colleague Stanislaw Ulam recalled a conversation with him on the 'ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.'
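To make the shape of Good's argument concrete, here is a deliberately toy numerical sketch of my own (not a model taken from Good's paper): each generation of machine designs its successor, and the size of the improvement it can make grows with its own surplus over human-level capability. The starting edge and the reinvestment rate are illustrative assumptions only.

```python
# Toy model of an 'intelligence explosion': each generation reinvests its
# surplus over human-level capability into designing a better successor.
# The starting edge and the reinvestment rate are illustrative assumptions.

HUMAN_LEVEL = 1.0
IMPROVEMENT_RATE = 0.5   # assumed: fraction of the surplus converted into design gains

capability = 1.05        # assumed: the first machine is only slightly better than human
for generation in range(1, 11):
    capability *= 1 + IMPROVEMENT_RATE * (capability - HUMAN_LEVEL)
    print(f"generation {generation:2d}: {capability:6.2f}x human level")
```

With these figures, a five per cent starting edge barely moves for the first few generations and then runs away, reaching roughly ninety times human level by the tenth; it is this qualitative shape, rather than any particular numbers, that Good's argument turns on.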
 
Moore's law (the observation that the number of transistors on an integrated circuit doubles roughly every two years), if it continues to hold, suggests that a complexity and power of computation that could in theory surpass human intelligence will arrive, by our own efforts, within this century.
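As a rough, back-of-the-envelope illustration of the kind of extrapolation involved, the sketch below simply compounds a two-year doubling; the 2012 starting count and the 'brain-scale' threshold are illustrative assumptions of mine, not figures from the article or from Kurzweil.

```python
# Toy extrapolation of Moore's law: a quantity that doubles every two years.
# The starting count and the 'brain-scale' threshold below are illustrative
# assumptions only, not figures from the article.

START_YEAR = 2012
START_COUNT = 2e9             # assumed: ~2 billion transistors on a high-end 2012 chip
TARGET_COUNT = 1e15           # assumed: a crude 'brain-scale' proxy, of order 10^15
DOUBLING_PERIOD_YEARS = 2

count, year = START_COUNT, START_YEAR
while count < TARGET_COUNT:
    count *= 2
    year += DOUBLING_PERIOD_YEARS

print(f"Under these assumptions the target is reached around {year}.")
```

With these figures the loop finishes at 2050; because the growth is exponential, changing either assumption by an order of magnitude shifts the date by only six or seven years, which is why such forecasts tend to cluster around the middle of the century.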
 
Later, the mathematician and science fiction author Vernor Vinge outlined four ways in which the singularity could occur:
 
The development of computers that are 'awake' and superhumanly intelligent.
Large computer networks (and their associated users) may 'wake up' as a superhumanly intelligent entity.
Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
Biological science may find ways to improve upon the natural human intellect.
 
It is not clear to this blogger how much of this is true and how much is dramatisation. However, assuming for now that a technological singularity is feasible, what are the real implications?
 
Friendly or hostile? - existential risk
Berglas noted in 2008 that: '…there is no direct evolutionary motivation for an artificial intelligence to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an artificial intelligence behaving in a way not intended by its creators…'
Another obvious issue is that a human-indifferent future intelligence is likely to be less resource-costly than a friendly one, since 'friendliness' is an extra attribute that would have to be designed in and maintained.
 
Is there really a threat - is there really a singularity?
Some see a future in which robots do almost everything we humans require as a paradise, where work of all kinds becomes a thing of the past. However, just as many see the threat that the superhumanly-intelligent systems of the future would regard humans and their behaviour as a nuisance and as competitors for resources, or, at best, become indifferent to their needs. I think all readers will have encountered versions of both these futures in science fiction stories or films.
 
Still other commentators have rejected the idea of an artificial intelligence-driven singularity altogether, and thus do not believe the threat exists. The counterarguments range from societal and technological collapse due to resource exhaustion, and the related 'technology paradox', to debates about computer clock-rates, circuit density and the limits of heat dissipation from chips versus paradigmatic changes in processor design; this author is not in a position to judge between them. Suffice it to say that the arguments remain unresolved as to whether the predicted technological singularity will occur in this century, or at all. But if even the possibility exists, what can, or should, be done about it?
 
What safeguards have been discussed?
Asimov’s Laws of Robotics were a fictional construct and, when examined carefully, do not represent any safeguard against this threat. Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence in 2000, proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real artificial intelligence would have a head start on self-improvement and, if friendly, could prevent unfriendly artificial intelligences from developing, as well as providing enormous benefits to mankind.
 
Conclusion
So what do we make of this? Is this just a field with no utility in the real world, or is it a serious issue which humanity needs to study now, collecting evidence of its reality, discussing it and reaching a consensus (rather as with the subject of climate change in the 1980s)? Eventually, if the data proved conclusive, steps might need to be taken early to constrain the risks.
History has some lessons for us, but ultimately it will depend upon those passionate and knowledgeable enough to pursue these ideas to determine whether the study of existential risk, and in particular of the putative 'singularity', happens on a large enough scale, and early enough, to avoid a disaster of our own making.
 
