CWTEC 2017: The Technologies underlying AI

CWJ reports from this year’s CW Technology and Engineering Conference in Cambridge.

There are two views which might be taken of the current hype storm associated with the term “Artificial Intelligence”.

One of them is the more hard-boiled, almost cynical one. In this view, the chip companies – having grown accustomed over the decades to building ever more powerful silicon, and to the corporate value this generates on a scale few industries can match – are desperate to discover reasons why people might purchase yet another generation of even more powerful machinery. Similarly, the web giants are extremely keen to discover new sources of value in the mountains of data they have accumulated.

According to this point of view, much of the current AI excitement emanates from those who stand to benefit from a general perception that useful AI is imminent. All that has actually happened is deployment of a lot of new compute power and data sets to muscle up decades-old concepts such as neural networks, with limited achievements thus far.

But there is another view. In this view, we acknowledge that, yes, concepts such as neural networks have indeed been around for a long time without becoming widely useful: but this often happens with important technologies. Hero of Alexandria used external combustion to open temple doors – an impressive trick of limited practical consequence, much as a computer might be used to beat a human at chess: but steam engines, much later, went on to genuinely transform human society, letting machinery do most of the manual work.

The question is this: are we, like Hero, destined never to see our concept take serious form? Or are we in AI’s equivalent of the era of Watt and Trevithick, about to see our technology move from niche applications to changing the world within our lifetimes?

That’s the question that CW sought to answer at this year’s CW Technology and Engineering Conference (CW TEC). This year’s CW TEC was not just focused on AI, as so many conferences and news articles are nowadays, but specifically on the technologies underlying AI: a focus which many CW members are well-equipped to appreciate. The event was hosted by Cambridge University’s Computer Lab.

The delegates were welcomed by David Paul, Director of Corporate Engineering and R&D from event sponsor Magna. Magna is a massive company ($36bn turnover) with a low profile: it makes subsystems for the car industry (“everything on the car except wheels and tyres”, as David informed us), so its deep interest in AI makes a lot of sense. The big goal, of course, is “Level 5” autonomy, meaning vehicles which require no human input at all.

“There’s a little bit of hype around Level 5 autonomy,” said David, “but it will happen.”

Summertime, and the AI cotton is high

The first speakers to take the stage were AI SIG champions Phil Claridge, “Virtual CTO” of Mandral Systems, and Peter Whale. Peter set the scene for the day by outlining just what it is that we usually mean by AI and Machine Learning (ML), considering whether AI/ML will evolve on a straight line or in a hockey-stick leap up to full-blown machine intelligence, and wondering whether the current “AI summer” of heightened interest and abundant funding will persist or pass as it has before. Peter and Phil also sketched out some of the questions now being asked about the future of current AI/ML technology: will it normally run in the cloud, at the edge, or on small devices? What kind of silicon will be used to run it? And what standards and tools does one actually use to get started in AI?

Cambridge University’s Professor Steve Young started to answer these questions. Steve, who is part of the team working on Siri, spoke about one of the main techniques underlying the current AI summer: Deep Learning. Put rather too simply, Deep Learning takes an idea that has been around for a very long time in AI research – the neural network – and makes the networks much bigger, using the large amounts of compute power that have only recently become available. The much bigger neural network is then trained very, very comprehensively using huge amounts of data. To use one recent example: if you have millions or billions of pictures and you know which are pictures of cats and which are not, you can use that data to train a deep neural network to reliably tell whether a picture has a cat in it.
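To make that concrete, here is a minimal sketch of the kind of training Steve described, written in PyTorch purely as an illustration. Everything in it is an assumption for demonstration purposes – the tiny network, the random tensors standing in for labelled photographs, the five-step training loop – and a production “cat detector” would train a far deeper network on millions of real, labelled images.

```python
# A toy version of "train a deep neural network on labelled pictures".
# Random tensors stand in for the photographs; labels mark cat (1) / not cat (0).
import torch
import torch.nn as nn

images = torch.randn(64, 3, 32, 32)            # 64 fake 3-channel 32x32 "photos"
labels = torch.randint(0, 2, (64,)).float()    # their (invented) cat/not-cat tags

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                           # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                           # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 1),                  # one logit: cat or not
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5):                          # real training runs vastly longer
    optimiser.zero_grad()
    loss = loss_fn(model(images).squeeze(1), labels)
    loss.backward()
    optimiser.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

The point is only the shape of the workflow – labelled examples in, repeated gradient updates, a classifier out; the scale of the data and compute is what the “Deep” adds.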

“AI and Deep Learning have become synonymous,” Steve asserted, though he cautioned that the labelled data needed to train a deep neural net “can be expensive” – not everyone has access to such data sets, perhaps offering a clue as to why the web giants are so active in this field.

The Revolution of Depth

AI is now undergoing the “Revolution of Depth”

- Theophane Weber, Google DeepMind

The next speaker, Theophane Weber from Google’s well-known acquisition DeepMind, was also enthusiastic about Deep Learning and the revolutionary impact that large amounts of compute power have had in the field of AI – even going so far as to speak of a “revolution of depth”. But he cautioned that there is no one single thing which makes effective AI: rather, there are many “Lego blocks”, architectural ideas which researchers or designers may assemble into a complete solution to a given problem.
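Theophane’s metaphor maps quite directly onto how practitioners actually write model code. The sketch below – an illustrative assumption in PyTorch, not DeepMind’s recipe – defines one reusable “brick” and snaps several together into a complete model:

```python
# "Lego blocks": small, standard architectural pieces assembled per problem.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """One reusable brick: convolution + normalisation + non-linearity."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
    )

# Assemble the bricks into one complete solution for a given problem.
vision_model = nn.Sequential(
    conv_block(3, 16),
    conv_block(16, 32),
    nn.AdaptiveAvgPool2d(1),                  # collapse spatial dimensions
    nn.Flatten(),
    nn.Linear(32, 10),                        # e.g. ten output classes
)

print(vision_model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 10])
```

Swap the bricks – recurrent layers for sequences, attention for language – and the same assembly style yields a different solution.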

An alternative view came from Cambridge University Professor of Machine Learning Carl Rasmussen, also chairman of CW TEC sponsor PROWLER.io. He showed a picture of the famous 1988 chess match in which the Deep Thought machine – built at Carnegie Mellon by a team later hired by IBM – beat grandmaster Bent Larsen.

“Look how the computer has an extra little piece of equipment,” he said. “A human to move its pieces. Any four-year-old child will completely outperform any robot at this task, and the problem is the software, not the hardware.

“Playing chess is easy, the difficult part is moving the pieces. Problems that look hard are not hard. And also … ‘Deep’ is not new!”

Next to speak was Dr Tony Robinson of Speechmatics, who – despite asserting that 30 years in speech research had taught him above all that one should “never do a live demo” – had a running transcript of everything he said appearing below his presentation.

Tony offered some intriguing insights into the world of speech recognition, suggesting that one should always ask which data sets have been used whenever an error rate is quoted: one conversational data set, supplied originally by America’s NSA, has been in use in the field for over thirty years. Some technologies, Tony said, could be described as “somewhat over-tuned” to this data set – they are excellent at interpreting it, but not very good at any other task.
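The “error rate” in question is usually the word error rate (WER): the edit distance between the recogniser’s output and a reference transcript, divided by the length of the reference. The minimal implementation below is an illustration invented for this article, not Speechmatics’ code:

```python
# Word error rate: edit distance between hypothesis and reference transcripts,
# normalised by reference length. Computed by dynamic programming.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                            # delete every remaining word
    for j in range(len(hyp) + 1):
        d[0][j] = j                            # insert every remaining word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Invented example: one substituted word out of six.
print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167
```

Tony’s warning, restated: a low WER means little until you know which reference transcripts it was measured against.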

The less starry-eyed viewpoint on AI certainly gained some visibility later, when Neil Lawrence, Professor of Machine Learning at Sheffield University and Senior Principal Scientist in charge of Amazon’s Machine Learning team at Cambridge, took the stage.

I’m an engineer, I solve real problems

- Neil Lawrence, Sheffield University

“I’m not sure I like the term ‘AI’,” he said. “I didn’t start out to do it, I only realised I was involved when people told me that what I was doing was AI … I did an engineering degree, you see, so I’m interested in solving real problems.”

Neil went on to poke some fun at the prevalence of the word “Deep” in today’s discussion, pointing out that the original Deep Thought chess machine back in the 1980s was named after Douglas Adams’ fictional supercomputer from The Hitchhiker’s Guide to the Galaxy.

“And that was a joke about a movie. So the term ‘Deep’ actually comes from a movie we shouldn’t talk about.”

One of the main debates of the day – and, perhaps, in the AI field at the moment – concerned “transparency”. It is sometimes suggested that future legal liability rules or regulations could require anyone deploying AI technology to be able to explain precisely how and why the system came to any given decision, step by step. This is often proposed for collisions involving AI-controlled vehicles, and sometimes in other contexts, such as AIs which have appeared, in tests, to exhibit racial or gender bias.

The consensus among many of the AI experts present throughout the day was that such transparency requirements would be almost impossible to meet.

“If you applied that principle to pilots, nobody could ever fly a plane,” said Professor Rasmussen.

Neil Lawrence pointed out that the rules now being talked about for AIs have never been imposed on equipment such as steam engine governors, though these are likewise non-human automated systems directing powerful equipment.

But he could see a major difference between the technology of old and that of today: “The only reason we don’t call ordinary control systems ‘AI’ is that they work,” he said.

Overall this year’s CW TEC was an engrossing day, but the biggest question raised remains to be answered: will the current “AI summer” persist?

CWTEC speakers

Not Artificial: Back row: Peter Whale, James Chapman, Tony Robinson, Bob Driver. Middle row: Anton Lokhmotov, David Page, Laurent Brisedoux. Front row: Sobia Hamid, Alison Lowndes, Theophane Weber, Daniel Neil


This article was originally published in the CW Journal: Volume One, Issue Two.

If you would like to receive a free copy of the CW Journal, please submit your details here. The CW Journal Editorial Board welcomes comment from those of you who would like to submit an article - simply email your synopsis to the team.
