Cloudless Sound Recognition
Recent years have seen artificial intelligence become a commercial asset through the growth of a whole new market segment of smart consumer products. In this context, AI aims to deliver human-like functionality in computer speech, computer vision and music management. But what about computer hearing in a more general sense, encompassing all the sounds around us? Audio Analytic, a Cambridge-based AI company, holds the world-leading position in researching, developing and commercialising cloudless sound recognition technology. The talk will outline the challenges associated with this particular AI modality. In particular, it will explain what puts sound recognition in a league of its own, where standard computer vision or computer speech recipes cannot simply be applied, and where original IP needs to be developed in order to deliver a world-leading sound recognition AI product.
Generative Adversarial Networks
It’s easy to think of deep learning as just something which chews on a load of data and makes a classification - the photo contains a dog, the audio waveform contains the word ‘hello’. But in recent years, a rapidly evolving approach called ‘Generative Adversarial Networks’ (GANs), which pits one AI against another to improve learning, has allowed deep learning to manipulate data in a completely different way. The implications of the technology are huge, from synthesising virtual worlds through to rigorous testing of AI or making it work with imperfect datasets. In this session we’ll look at how to train and use GANs, with practical examples from Cambridge Consultants’ AI research lab, the Digital Greenhouse. We’ll consider the advances that have been made since GANs’ 2014 debut, where they still fall short, and where this technology could lead in the future.
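To make the "one AI pitted against another" idea concrete, here is a minimal sketch of the adversarial training loop on a toy one-dimensional problem, written in plain NumPy rather than any deep learning framework. Everything here is illustrative and not taken from the talk: the generator is a single linear map trying to turn noise into samples resembling a Gaussian centred at 3.0, the discriminator is logistic regression, and the weight decay and EMA of generator weights are common stabilising tricks added because vanilla GAN updates tend to oscillate.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative names, not any library's API).
# Generator g(z) = w_g*z + b_g maps noise z ~ N(0,1) towards the
# "real" data distribution N(3, 1). Discriminator D(x) =
# sigmoid(w_d*x + b_d) tries to output 1 for real samples and 0
# for generated ones. The two take turns improving their own
# objective at the other's expense -- the adversarial game.

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

w_g, b_g = 1.0, 0.0        # generator parameters
w_d, b_d = 0.0, 0.0        # discriminator parameters
ema_w, ema_b = w_g, b_g    # EMA of generator weights (common stabiliser)
lr, batch = 0.05, 64

for step in range(6000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # with mild weight decay (0.999) to damp the usual oscillation.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d = 0.999 * w_d + lr * (np.mean((1 - d_real) * real)
                              - np.mean(d_fake * fake))
    b_d = 0.999 * b_d + lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), the "non-saturating" loss
    # from the original 2014 GAN paper.
    d_fake = sigmoid(w_d * (w_g * z + b_g) + b_d)
    signal = (1 - d_fake) * w_d   # gradient through D into each fake sample
    w_g += lr * np.mean(signal * z)
    b_g += lr * np.mean(signal)

    ema_w = 0.999 * ema_w + 0.001 * w_g
    ema_b = 0.999 * ema_b + 0.001 * b_g

# Sample from the smoothed generator: its mean should have drifted
# from 0 towards the real mean of 3.0.
samples = ema_w * rng.normal(0.0, 1.0, 1000) + ema_b
print(f"generated mean ~ {np.mean(samples):.2f} (real mean is 3.0)")
```

Even this tiny example exhibits the failure modes the session touches on: without the damping tricks the two players chase each other in circles rather than converging, and the linear generator tends to collapse its variance, a miniature version of the mode collapse seen in full-scale GANs.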