Singularity is not near

"What to think about machines that think," was the question that the well-known online science and technology magazine Edge asked more then 175 scientists, artists and other thinkers who are active in the field of artificial intelligence (AI).

In a world where technology plays an increasingly important role — both the technology we deal with daily and that which operates more in the background (in a sense, different manifestations of the same development) — this is a relevant, even an extremely urgent, question.

Especially when we see it in the light of the debate that erupted after personalities such as Stephen Hawking, Bill Gates and Elon Musk sounded the alarm, announcing that we must take immediate action if we don't want to be wiped out by ever smarter AI.

They relied on the singularity theory, which states that as long as Moore's Law (which predicts that processing power doubles every two years) holds, the pace of technological progress will increase exponentially, eventually making AI more intelligent than humans and possibly even a danger to mankind.
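The compounding that Moore's Law describes is easy to illustrate: doubling every two years means a thousandfold increase in two decades. A minimal sketch of that arithmetic (the function name is mine, purely illustrative):

```python
def projected_power(years: int, doubling_period: int = 2) -> int:
    """Relative processing power after `years`, starting from a baseline of 1,
    assuming one doubling every `doubling_period` years (Moore's Law)."""
    return 2 ** (years // doubling_period)

# Ten doublings in twenty years: a 1024-fold increase over the baseline.
print(projected_power(20))  # 1024
```

Whether this extrapolation continues to hold, and whether raw processing power translates into intelligence at all, is precisely what the book's contributors call into question.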

The singularity theory was given an extra boost when Nick Bostrom's book Superintelligence (2014) became a bestseller. In it, Bostrom reports extensively on the various stages and obstacles in the development of AI and points at the risks that the singularity could bring.

Machines that think (2016), the compilation of responses received by Edge, is an effective antidote to the delusion that has increasingly come to dominate the discussion about AI. The book contains provocative questions, which often raise new and meaningful follow-up questions.

What do we mean when we speak of 'thinking'? Machines can perform tasks and are able to learn, but can they also think? What is the role of consciousness and emotions, and could a machine have a consciousness or experience emotions? And how would we recognise it if it did?

Steadily, the book works towards the following viewpoints: we know too little about our own thinking, and it is too early to say anything meaningful about thinking machines (except that they are totally different from what we imagine). Moreover, machines cannot think or learn anything that a human did not put in at an earlier stage.

The doomsday scenario the singularity outlines evokes notions, nearly vanished from collective memory, of the end of times or the wrath of the gods — and is even less likely. Machines want nothing of their own accord. Why would they want to destroy humans?

In Machines that think, Freeman Dyson, Alison Gopnik, Nick Bostrom (yes, him again), Brian Eno, Maria Popova, Douglas Coupland and many others write about this multifaceted topic from their own fields of expertise. It would have been better if Hawking, Gates and Musk had likewise limited their contributions to the discussion to their own fields.

Michiel Jansen-Dings