Should we panic about the coming of AI?

Written by John Haine, on 1 Dec 2017

This article is from the CW Journal archive.

Reverse Turing Test: We need to understand human intelligence first


Norbert Wiener, a pivotal figure in the history of information technology, coined the term cybernetics in the title of his 1948 book. It contained the prophetic line: "the man who has nothing but his physical power to sell, will have nothing to sell which it is worth anyone's money to buy." Should we today replace the word "physical" with "mental"? Would that be a future we want, and is it even a possible future?

Vicki MacLeod's thoughtful and important piece in this issue discusses the way that unconscious cultural norms and biases in an AI system's design or training may affect and disadvantage members of different ethnic and social groups. The clear implication is the need to ensure such norms and biases are made explicit and removed so that "everyone is equal in the Mind's eye".

Over the past half-century or so we have seen increasing recognition that society applies many implicit norms that discriminate against various groups in a variety of ways. On the back of that has grown a trend to "legislate" against such discrimination, whether by actual law or by codified good practice. The aim is to make people stop discriminating by making them aware of their biases, and/or by making it socially uncomfortable, or even illegal, to act on them. Many people call this "political correctness".

Against that is another human trait. There are those who fulminate against refugees entering Europe in search of a life of any sort after their homes and families have been ravaged by war. Others though have the insight to recognise and empathise with fellow human beings and this leads them to offer welcome and support.

Correctness or Empathy?

We might characterise these two approaches as "political correctness" and "empathy". Making sure we recognise and correct the "learning" biases we program into our systems is analogous to political correctness.

The question is, can we build in insight and empathy? This crystallises a key question in AI: are there human mental processes that a machine even in principle cannot implement?

Whether running on the quad-core processor in your smartphone, a server farm hosting part of the "cloud", or an array of IBM "True North" neural processors, an AI system is at base a program running on a Turing Machine. As such it is a formal system subject to mathematical constraints. Roger Penrose and his followers point out that there are statements that a mathematician can see are true but that are, even in principle, undecidable by such a system. Mathematicians are rare, but they are human, and if there is just one mental trait which cannot be implemented in a machine then there is a clear distinction between AI and "HI".
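The undecidability Penrose appeals to descends from classic results such as the halting problem: no program can decide, for every program, whether it terminates. As a loose illustration only (the `halts` decider and `make_paradox` helper here are hypothetical, not anything from this article), the standard diagonal construction can be sketched in a few lines of Python:

```python
def make_paradox(halts):
    """Given a claimed halting decider halts(f) -> bool,
    build a program that defeats it by doing the opposite
    of whatever the decider predicts about it."""
    def paradox():
        if halts(paradox):
            while True:   # decider said "terminates" -- so loop forever
                pass
        return            # decider said "loops" -- so terminate at once
    return paradox

# Any candidate decider is wrong on its own paradox. For example, one
# that always answers "does not halt" is refuted because its paradox
# then plainly halts:
always_no = lambda f: False
p = make_paradox(always_no)
p()   # returns immediately, contradicting always_no's verdict
```

A decider that always answered "halts" would be refuted symmetrically, since its paradox would loop forever. The point of the sketch is only that the limitation is mathematical, not an artefact of any particular hardware.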

If insight is a trait that cannot be implemented in AI, then we would have to depend on making our systems "politically correct", and that would mean identifying and eliminating all the implicit and explicit biases and norms that may affect the way that they interact with people.



Reverse Turing test

Let's make this more concrete with a sort of "reverse Turing Test". You are in hospital, about to undergo the most delicate eye surgery using the latest robotic apparatus. This could be controlled by a human expert, or by an artificial "mind" hosted in the Cloud. You could expect the human to understand how you are feeling, to be keyed up by the need to preserve your life and improve your vision even while cutting into your organs: in short, to empathise with you. You might feel (I do!) that such empathy is key to the surgeon's calling, and indeed to your willingness to undergo the operation. Could you feel the same about the Cloud Mind?

We assume that AI will play all kinds of roles in our future society where it supplements or replaces human intelligence. But do we even understand what human intelligence, and consciousness, is; and can we be sure that there are not crucial aspects of it that no machine can ever emulate? These are key questions that need answering and society needs to have those answers to control how AI is deployed, before we commit ourselves to it.

John Haine
Visiting Professor - University of Bristol (Communication Systems & Networks Research Group)

John Haine has spent his career in the electronics and communications industry, working for large corporations and with four Cambridge start-ups. His technical background includes R&D in radio circuitry and microwave circuit theory, and the design of novel radio systems for cordless telephony, mobile data, fixed wireless access and IoT communications. He has led standardisation activities in mobile data and FWA in ETSI, and contributed to WiMax in IEEE. At various times he has been involved in and led fund-raising and M&A activities. In 1999 he joined TTP Communications, working on research, technology strategy and M&A; after the company's acquisition by Motorola he became Director of Technology Strategy in Motorola Mobile Devices. After leaving Motorola he was CTO Enterprise Systems with ip.access, a manufacturer of GSM picocells and 3G femtocells. In early 2010 he joined Cognovo, which was acquired by u-blox AG in 2012. He led u-blox's involvement in 3GPP NB-IoT standardisation and the company's initial development of the first modules for trials and demonstrations. Now retired from u-blox, he is an Honorary Professor in Electronic and Electrical Engineering at Bristol University, where he chairs the SWAN Prosperity Partnership Project external advisory board. He was founder chair and is Board Member Emeritus of the IoT Security Foundation. He served on the CW Board and chaired the Editorial Board of the CW Journal. John has a first degree from Birmingham (1971) and a PhD from Leeds (1977), and is a Life Member of the IEEE.
