Opinion: Should we panic about the coming of AI?

Reverse Turing Test: We need to understand human intelligence first

Norbert Wiener, a pivotal figure in the history of information technology, coined the term cybernetics in the title of his 1948 book1. It contained the prophetic line: “the man who has nothing but his physical power to sell, will have nothing to sell which it is worth anyone’s money to buy.” Should we today replace the word physical with mental? Would that be a future we want, and is it even a possible one?

Vicki MacLeod’s thoughtful and important piece in this issue discusses the way that unconscious cultural norms and biases in an AI system’s design or training may affect and disadvantage members of different ethnic and social groups. The clear implication is the need to ensure such norms and biases are made explicit and removed so that “everyone is equal in the Mind’s eye”.

Over the past half-century or so we have seen increasing recognition that society applies many implicit norms that discriminate against various groups in a variety of ways. On the back of that has grown a trend to “legislate” against such discrimination, whether by actual law or by codified good practice. The aim is to make people stop discriminating by making them aware of their biases and/or by making it socially uncomfortable, or even illegal, to apply them. Many people call this “political correctness”.

Set against that is another human trait. There are those who fulminate against refugees entering Europe in search of a life of any sort after their homes and families have been ravaged by war. Others, though, have the insight to recognise and empathise with fellow human beings, and that leads them to offer welcome and support.

Correctness or Empathy?

We might characterise these two approaches as “political correctness” and “empathy”. Making sure we recognise and correct the “learning” biases we program into our systems is analogous to political correctness. The question is, can we build in insight and empathy? This crystallises a key question in AI: are there human mental processes that a machine cannot, even in principle, implement?

Whether running on the quad-core processor in your smartphone, a server farm hosting part of the “cloud”, or an array of IBM “TrueNorth” neural processors, an AI system is at base a program running on a Turing machine. As such it is a formal system subject to mathematical constraints. Roger Penrose and his followers point out2 that there are statements a mathematician can see are true but which, even in principle, are undecidable by such a system. Mathematicians are rare, but they are human, and if there is just one mental trait that cannot be implemented in a machine then there is a clear distinction between AI and “HI”. If insight is such a trait, then we would have to depend on making our systems “politically correct”, and that would mean identifying and eliminating all the implicit and explicit biases and norms that may affect the way they interact with people.
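To make that kind of limit concrete, the classic example is Turing’s halting problem: no program can decide, for every program and input, whether it eventually halts. The sketch below is my illustration rather than anything from the article, and the names halts and contrary are hypothetical; it shows the diagonal argument in Python under the assumption that a correct decider existed.

    # A minimal sketch of Turing's diagonal argument (illustrative only;
    # 'halts' and 'contrary' are hypothetical names, not from the article).

    def halts(program, argument):
        """Suppose this returned True exactly when program(argument) halts.
        The argument below shows no such total, always-correct decider can
        exist, so this stub merely stands in for the claimed function."""
        raise NotImplementedError("no correct implementation is possible")

    def contrary(program):
        """Do the opposite of whatever 'halts' predicts about program(program)."""
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        return "halted"      # predicted to loop forever, so halt

    # Asking halts(contrary, contrary) is contradictory whichever answer it
    # gives, so a correct 'halts' cannot exist.  A mathematician can see this
    # is true of every program on a Turing machine, yet no such program can
    # decide it for itself.

Penrose’s own argument runs from Gödel’s incompleteness theorems rather than from Turing’s proof, but the flavour of the limit is the same: a truth visible to human insight that the formal system itself cannot establish.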

Reverse Turing Test

Let’s make this more concrete with a sort of “reverse Turing Test”. You are in hospital, about to undergo the most delicate eye surgery using the latest robotic apparatus. It could be controlled by a human expert, or by an artificial “mind” hosted in the Cloud. You would expect the human to understand how you are feeling, to be keyed up by the need to preserve your life and improve your vision even while cutting into your organs: in short, to empathise with you. You might feel (I do!) that such empathy is key to the surgeon’s calling, and indeed to your willingness to undergo the operation. Could you feel the same about the Cloud Mind?

We assume that AI will play all kinds of roles in our future society, supplementing or replacing human intelligence. But do we really understand what human intelligence, and consciousness, are? And can we be sure there are no crucial aspects of them that no machine can ever emulate? These are key questions, and society needs the answers in order to control how AI is deployed, before we commit ourselves to it.

Professor John Haine

1 Cybernetics: Or Control and Communication in the Animal and the Machine; Norbert Wiener; MIT Press, 1948/1961.
2 Shadows of the Mind: A Search for the Missing Science of Consciousness; Roger Penrose; Oxford University Press, 1994. See also https://penroseinstitute.com/


This article was originally published in the CW Journal: Volume One, Issue Two.
