Don’t panic!

Written by Dr Tony Milbourn on 1 Dec 2017

This article is from the CW Journal archive.

Dystopian visions of joblessness and machine rebellion have little to do with real-world AI, says Tony Milbourn

Two hundred years ago, economists and intellectuals worried over "The Machinery Question": whether machinery was going to ruin the lives of working people. Yet the lives of people today are better, safer, easier and possibly more enjoyable than they were then.

The fear was that what we now call the Industrial Revolution (the introduction of machinery to undertake routine human tasks) was going to make people redundant, destroy livelihoods and fracture society. Two hundred years later, the overall effect of mechanisation has been to create jobs, albeit new sorts of jobs, rather than destroy them.

We can but hope that the same is going to be true for AI.


AI gets Go, AI gets going

I first encountered neural nets back in the 1970s when I shared a lab with Igor Aleksander, now an Emeritus Professor at Imperial College, who was building layered neural networks, very like those that are delivering such powerful outcomes today.

Results then were limited by the depth of the networks, the speed of processing, and the training data sets. AI went into the wilderness, only to return to favour around 2012 when more powerful AI programs started to perform better than humans on tasks like the ImageNet challenge – recognising features in images. The best recent example is perhaps when AlphaGo unexpectedly beat the world champion at Go.
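For readers who have never seen one, a layered (feed-forward) network of this kind is, at its core, just a stack of matrix multiplications with a simple non-linearity between layers; deeper networks mean more such layers, and training means adjusting the numbers in those matrices from data. The following is a minimal, untrained sketch in Python with made-up layer sizes, purely to illustrate the structure, not a reconstruction of any particular system from the 1970s or today.

```python
# Toy forward pass through a small layered (feed-forward) network using numpy.
# Layer sizes and random weights are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]          # input -> two hidden layers -> output

# Random weights and zero biases for each layer (an untrained network).
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input vector through every layer with a ReLU non-linearity."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)      # hidden layers: linear map + ReLU
    return x @ weights[-1] + biases[-1]     # final layer: plain linear outputs

print(forward(rng.standard_normal(8)))      # four untrained output values
```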

[Image: Pepper the robot with a tablet]

Meet Pepper

Pepper is a cute robot from SoftBank, the Japanese company which now owns ARM. It gives an endearingly human face to technology, with its very childlike movements and huge eyes. But Pepper's role in replacing staff in shops will be seen by some as every bit as job-threatening as the spinning jenny or DeepMind's AlphaGo.

We are witnessing the onset of what I suggest we call "ASAI", application-specific AI: systems able to solve specific problems, and solve them better than humans can. Great, so what? Is it going to drive a fundamental change in society? We can improve our ability to screen medical images, or analyse failures of mechanical equipment, or even drive cars. Important, but not apocalyptic.

An AI-based security scanner can be more efficient and more consistent than the manual ones used today, where the poor operators have to look at weird images of our luggage to detect if we have a bomb hidden under our folding umbrella. I can't see this change being anything other than beneficial to travellers and operators.

There are more important and complex tasks where the AI device advises and suggests rather than decides. Medical diagnostics, for example, where the AI might pick up signs that can guide the doctor. Again, doctors and patients would probably welcome such an approach. Nearly half of patients have already indicated that they would prefer to deal with a "computer" rather than a person when discussing their ailments, presumably provided they trusted the outcome.

AGI or ASAI?

We deal with AI more than we realise. Of course, the most obvious place to deploy such power-hungry computing is in web services, so all the web marketplace vendors use AI to tailor the way they offer their products to us as we navigate their sites.

My view here is that some of this seems pretty "Mickey Mouse". The fact that I have bought a washing machine does not necessarily mean that I am ready to change my favourite washing powder. On the other hand, perhaps it is so subtle that I haven't noticed I am being fed offers that I find attractive and take up.

All three of these examples – scanning, diagnostics and recommendations – are ASAI. They replace, support or improve the delivery of existing products and services. I doubt that they presage a fundamental change in society that we need to panic about. It looks to me like a modern version of The Machinery Question: jobs will be lost and new ones (probably better ones) will be made.

But there is more to come: Artificial General Intelligence (AGI), a non-human machine that behaves like an intelligent being. The Turing Test is well-known, but more relevant may be the Wozniak Test: the ability to enter an unknown property and independently make a cup of coffee.

AGI is way more significant than ASAI. Sufficiently so that very clever people like Stephen Hawking, Elon Musk and Prof Nick Bostrom of Oxford are concerned about superintelligence: concerned that we are creating something we can't control and which will out-perform and supersede us. As Hawking points out, the pace of change in artificial systems is much greater than the Darwinian pace of biological systems – we won't be able to keep up. I think they are right to be concerned, but that we have plenty of time to deal with the topic. After all, whilst DeepMind's AlphaGo is extremely clever to beat Ke Jie at Go, I doubt that the Chinese champion has a problem making a cup of coffee.

AGI will appear; the question is when. Recent progress in ASAI has been so impressive that it makes prediction difficult, but Ray Kurzweil has suggested before 2045, and many agree.

AI: As intelligent as humans … which humans exactly?

I suspect it is more difficult than that. Remember Eliza? Created in the mid-1960s, Eliza was a text-based natural language program that gave the illusion of intelligence through pre-programmed rules and directions, without any inherent understanding. Surprisingly, many users were convinced of Eliza's intelligence. (If you want to give it a go, try http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm – it is not overwhelming.)
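To show how little machinery is needed to create that illusion, here is a minimal Eliza-style sketch in Python: a handful of regular-expression rules mapping phrases to canned replies, with no understanding involved. The patterns and responses are invented for illustration and are far cruder than Weizenbaum's original script.

```python
# A minimal Eliza-style responder: hard-coded pattern/response rules and
# nothing resembling a model of meaning. Rules and replies are invented here.
import random
import re

RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bi am (.+)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bmy (\w+)",    ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
    (r"\bbecause\b",   ["Is that the real reason?", "What other reasons come to mind?"]),
]
FALLBACKS = ["Please go on.", "I see. Can you elaborate?", "How does that make you feel?"]

def respond(user_input: str) -> str:
    """Return a canned reply by matching simple regex rules, Eliza-style."""
    text = user_input.lower()
    for pattern, replies in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I am worried about AGI"))        # e.g. "Why do you think you are worried about agi?"
    print(respond("Because it might replace my job"))
```

Wrapped in a read-print loop, even a script this crude can hold a superficially plausible conversation, which is exactly the trap: fluency is easy to mistake for understanding.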

Might the same not be true for AGI? It will be easy to over-estimate the intelligence of an AGI, which raises something of a philosophical question: how do you judge when something is more intelligent than you are? Ask it to do things you can't do? Ask it questions you don't know the answer to?

By the time AGI is a threat to society, we'll be able to pick up our copy of The Hitchhiker's Guide to the Galaxy and take in that immortal phrase on the cover: "Don't Panic".

Dr Tony Milbourn
Corporate Strategy - u-blox UK

Tony has 30 years’ experience in the mobile communications industry and a PhD in control theory. Following a career at PA Technology and then as one of the founders of TTP, in 2000 he led the spin-out and flotation of TTP Communications plc, a major licensing business in cellular that was acquired in 2006 by Motorola. He was also a founder of ip.access, the femtocell business, and more recently led the spin-out of a soft modem start-up, Cognovo, from ARM Holdings. Cognovo was acquired by u-blox AG in 2012. u-blox is a $400m Swiss supplier of location and communications modules and chips that is focused on industrial, automotive and professional applications, particularly in the Internet of Things. For 5 years Tony drove the strategic expansion of u-blox and enabled a number of acquisitions that extended the scope and direction of the company. He is interested in creating new opportunities at the point where communications and computing converge.
