Winning in AI’s Cambrian explosion

Written by CW Journal Team, on 1 Dec 2017

This article is from the CW Journal archive.

With tools, platforms and hardware proliferating wildly, careful choice is needed

AI R&D today could be likened to the 'Cambrian Explosion' of 541 million years ago, in which life on Earth suddenly proliferated to an astonishing extent, probably driven by a sharp rise in atmospheric oxygen. Life then rapidly became much more sophisticated and diverse, and the same is true today of the new lifeforms and ecosystems of AI. The pace of change is intense. Application designers struggle to work out which AI approaches (models, algorithms), software (AI frameworks, compute libraries) and hardware can satisfy their requirements. The possible trade-offs are bewildering in number, as are the constraints on cost and time to market.

At the same time, hardware vendors see tantalising business opportunities to achieve 10-100x improvements over today's state of the art by specialising for emerging AI workloads, but find themselves at a loss because workload requirements change much more rapidly than the traditional hardware design cycle allows.

Keeping up with the rate of AI innovation, therefore, calls for a highly agile system approach that engages the entire community in a virtuous co-design and optimisation cycle, one in which the design of AI applications is informed by hardware capabilities and the design of hardware is informed by the AI applications which will make use of its capabilities.

Speeding towards Level 5 cars, blinking our eyes open very briefly every four seconds

To give a concrete example, the automotive industry is racing to provide autonomous driving capabilities by a self-imposed deadline of 2020. Researchers in machine vision and machine learning are being lured in their hundreds to join well-funded R&D labs. Most of them, however, only have experience of developing algorithms that run on powerful workstations or in the cloud, and they focus primarily on the accuracy of predictions. For example, the most accurate algorithm for detecting cars, cyclists and pedestrians on the KITTI Vision Benchmark Suite (www.cvlibs.net/datasets/kitti/eval_object.php) takes 4 seconds to process a single image on a high-end GPU. Clearly, such an approach is unsuitable for deployment in cars, whose resource- and power-constrained embedded systems have no guaranteed connectivity for offloading computations to the cloud.
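
As a rough illustration (not from the article), here is a minimal Python sketch of how one might measure a detector's per-image latency and effective frame rate. The detect_objects stand-in is hypothetical and simply simulates a 4-seconds-per-image model; a real benchmark would run actual inference on KITTI images.

```python
import time

def detect_objects(image):
    # Hypothetical stand-in for an object detector: a real benchmark
    # would run model inference here. We simulate 4 s per image.
    time.sleep(4.0)
    return []  # e.g. a list of (label, bounding_box, score) tuples

def measure_latency(detector, images):
    # Time each detection, then report mean latency and frame rate.
    latencies = []
    for image in images:
        start = time.perf_counter()
        detector(image)
        latencies.append(time.perf_counter() - start)
    mean = sum(latencies) / len(latencies)
    print(f"mean latency: {mean:.2f} s/image ({1.0 / mean:.2f} fps)")

measure_latency(detect_objects, [None] * 3)
```

At 4 seconds per image, a car travelling at 70 mph covers roughly 125 metres between frames - hence the blinking-eyes analogy in the heading above.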

[Image: HAL, 2001: A Space Odyssey]

Strange new lifeforms: before the Cambrian Explosion, most life was simple and single-celled. Before AI, most computers were simple too.

Conversely, state-of-the-art algorithms that process tens to hundreds of images per second may not meet functional safety requirements: for example, they may be unacceptably likely to miss pedestrians in low-light conditions. Furthermore, these algorithms still require powerful hardware that consumes hundreds of watts of electricity and costs a couple of thousand dollars.

Even if this is acceptable for self-driving car prototypes, it is prohibitively expensive and power-hungry for mass deployment. Ordinary cars of today typically have less than a kilowatt of electrical power available, while the EVs of the future will need to maximise battery life and husband every watt-hour to that end. Even heating and air-conditioning are troublesome loads for an EV, so a high-powered computing system to run the directing AI is clearly unacceptable.

Ultimately, the automotive industry wants compute platforms (software/hardware) that consume perhaps tens of watts and cost perhaps tens of dollars while still meeting recognised safety standards. Manufacturers turn to the semiconductor industry for solutions, but find themselves gobsmacked by the huge variety of choices in the fast-moving AI/software/hardware landscape, and by the lack of dependable benchmarking methodologies and hence of verifiable benchmark data.
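
To make the trade-off concrete, here is a hedged sketch of the kind of screening a car maker implicitly performs; the candidate platforms and their figures below are invented for illustration, not real benchmark data.

```python
# Hypothetical candidates: (name, watts, dollars, meets_safety_standard)
candidates = [
    ("workstation GPU", 250, 2000, True),
    ("embedded SoC A",   15,   25, True),
    ("embedded SoC B",    8,   40, False),  # e.g. misses pedestrians in low light
]

MAX_WATTS, MAX_DOLLARS = 50, 100  # "tens of watts, tens of dollars"

# Keep only platforms that satisfy every hard constraint.
viable = [
    (name, watts, dollars)
    for name, watts, dollars, safe in candidates
    if safe and watts <= MAX_WATTS and dollars <= MAX_DOLLARS
]
print(viable)  # [('embedded SoC A', 15, 25)]
```

The hard part, of course, is obtaining trustworthy numbers to put in such a table in the first place - which is exactly where dependable benchmarking comes in.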


This benchmark is on a bench from the past

The semiconductor industry is at a crossroads too. AI represents a longed-for opportunity to sell a new generation of hardware, but this is not a simple matter. With potentially colossal business opportunities and intense competition appearing on all sides, the stakes could not be higher. Even the big, established global players risk losing it all to disruptive upstarts. Traditional design methodologies based on optimising for a small number of arbitrarily selected benchmarks are at breaking point. Markets crave solutions optimised for emerging workloads such as AI and machine learning. Players who understand that the rules of the game have profoundly changed, and who are able to adapt, will thrive; those who do not will fail.

It should be clear by now that to win, the workload designers and the system designers need to talk. In fact, they need to do much more than talk. Collaboration should be supported by an ecosystem of tools for seamless AI/software/hardware co-design and optimisation, enabling knowledge sharing and artefact reuse.
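
The article does not prescribe a mechanism, but one ingredient of such an ecosystem is recording every experiment as a self-describing, machine-readable artefact that others can compare against, reproduce or build on. A minimal sketch, with a schema of our own invention (loosely in the spirit of the authors' Collective Knowledge project, not its actual API):

```python
import json, os, platform, time

def record_experiment(model, dataset, latency_s, accuracy):
    # Save one benchmark run as a self-describing JSON artefact.
    artefact = {
        "model": model,
        "dataset": dataset,
        "latency_s": latency_s,
        "accuracy": accuracy,
        "machine": platform.machine(),  # capture the hardware context
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    os.makedirs("results", exist_ok=True)
    path = os.path.join("results", f"{model}-{dataset}.json")  # hypothetical layout
    with open(path, "w") as f:
        json.dump(artefact, f, indent=2)
    return path

# Invented example values, for illustration only.
print(record_experiment("ssd-mobilenet", "kitti", 0.031, 0.71))
```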

Visionaries like Gill Pratt, who heads the Toyota Research Institute, seem instinctively to get this. They argue that we should all embrace "co-opetition" via customisable open-source software and hardware, and eschew the "not-invented-here" syndrome.

Many others say similar things but do not back up their words with action. As we heard at this year's CW TEC conference, which focused on AI:
"Lots of people, despite what they say, are hanging on to their secret sauce and keeping it to themselves." Dr Peter Baldwin, Founder, Myrtle Software, CW TEC 2017.

It is becoming clear that collaboration is not just a bullet point in an idealist's manifesto. It’s a pragmatist's approach that is sensitive to talent scarcity, costs and time pressures. Only a community of like-minded individuals and organisations with appropriate collaboration technologies in place can move the needle on fulfilling the promises of AI in an effective (and fun) way.

Authors

Dr Anton Lokhmotov
CEO & Co-Founder, dividiti


Dr Anton Lokhmotov is CEO and co-founder of the startup dividiti, contributing to Collective Knowledge (CK), an open technology, platform and initiative for accelerating AI R&D by crowdsourcing interdisciplinary design and optimisation knowledge. In 2010-2015, Dr Lokhmotov led the development of GPU Compute programming technologies for the ARM Mali GPUs, including production and research compilers, libraries and performance analysis tools.

Dr Grigori Fursin
CTO & Co-Founder, dividiti

Dr Grigori Fursin is CTO and co-founder of dividiti, as well as Chief Scientist and founder of the non-profit cTuning foundation, which is behind the Collective Knowledge technology. In 2007-2015, Dr Fursin worked as a tenured research scientist at INRIA. In 2010-2011, he helped establish the Intel Exascale Lab in France as head of its Program Optimization and Characterization group.

