Challenges of AI: Ethics and Interpretability

Brought to you by The Artificial Intelligence Group

Jess Whittlestone and Tameem Adel, postdoctoral researchers at the Centre for the Future of Intelligence, will discuss some of the biggest practical and technical challenges in AI ethics. What important dilemmas do we face? And how can we make vague goals like ‘interpretability’ technically precise?

About the event

Interest in the ethical implications of AI has exploded in the past few years, and various codes and commitments have been established across academia, industry, and policy. These all emphasise similar things: that AI should be used for the benefit of humanity, must respect widely held values such as privacy, justice, and autonomy, and must be made interpretable for humans. While agreeing on these principles is valuable, it’s still far from clear how we implement them in practice. 

One challenge is that widely-agreed principles come into conflict in concrete cases. It’s not clear how to resolve these tensions: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example? How do we get the benefits of data-driven personalisation without threatening important societal values like solidarity? We’ll discuss some problems with how the ethical issues surrounding AI are often talked about, and highlight some key dilemmas we need to face in turning principles into practice. 

A second challenge is that many of the goals of AI ethics are vastly underspecified. Here we’ll focus particularly on interpretability, widely considered crucial for ensuring ethical real-world deployment of intelligent systems. Part of the reason interpretability is deemed so important is that it helps us to ensure that other goals are met in the development and use of AI systems: that they are safe, reliable, and fair. The volume of research on interpretability is rapidly growing, but there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for these issues to be rigorously addressed. We’ll shed light on these interpretability issues, as well as on the state-of-the-art machine learning algorithms that aim to address them.
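As a purely illustrative aside (not a method endorsed by the speakers): one common family of approaches in this growing literature is post-hoc feature attribution, which gives a flavour of how ‘interpretability’ can be operationalised, and of how much it leaves unspecified. The sketch below uses scikit-learn’s permutation importance on a stand-in tabular dataset; the dataset and model choices are arbitrary assumptions made only for the example.

```python
# Illustrative sketch only: one post-hoc interpretability probe
# (permutation feature importance), not a definitive measure of it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular task, standing in for any real-world deployment.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and record how much held-out accuracy drops:
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features whose shuffling hurts performance most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, mean_drop in ranked[:5]:
    print(f"{name}: {mean_drop:.3f}")
```

Even this simple probe illustrates the open questions above: the numbers it produces depend on the data split, the scoring metric and the number of repeats, and it is far from obvious whether they constitute ‘interpretability’ in any of the senses the ethics literature has in mind.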

Join us for a fascinating discussion at this AI SIG byte-size event at Amazon Cambridge Development Center, followed by networking and refreshments.

You can follow @CambWireless on Twitter and tweet about this event using #AIByteSize.

Speakers

Jess Whittlestone - Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Jess is a research associate focused on AI policy. She is particularly interested in how we can build appropriate levels of trust in AI systems amongst policymakers and the general public, and how to avoid harmful misperceptions of the capabilities and risks of AI.

Jess has a PhD in Behavioural Science from the University of Warwick, and a first class degree in Mathematics and Philosophy from Oxford University. In her PhD, she argued that confirmation bias is not necessarily as "irrational" as it seems, with implications for how we think about the strengths and weaknesses of human reasoning. Previously, Jess worked for the Behavioural Insights Team, where she advised various government departments on improving their use of behavioural science, evidence, and evaluation methods, with a particular focus on foreign policy and security. She has also worked as a freelance journalist and has had her writing published in Aeon, Quartz, and Vox.

Tameem Adel Hesham - Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Tameem Adel is a research fellow whose main research interests are machine learning and artificial intelligence, more specifically probabilistic graphical models, Bayesian learning and inference, medical applications of machine learning, deep learning and domain adaptation. He has also worked on developing transparent machine learning algorithms and on providing explanations of decisions taken by deep models.

He obtained his PhD from the University of Waterloo in 2014, advised by Prof. Ali Ghodsi. He then worked as a postdoctoral researcher at the Amsterdam Machine Learning Lab, advised by Prof. Max Welling.

SIG Champions

Laurent Brisedoux - Senior Manager, Amazon Cambridge Development Center

Laurent has been heading the Amazon R&D team in Cambridge, part of the Lab126 organization, since its creation in 2014. His group is responsible for developing software for Amazon’s consumer electronic devices such as Cloud Cam, Echo Look, Echo Show and many more innovative products to come. Prior to that, Laurent was in charge of the development and productisation of imaging technologies at Broadcom. He joined the Broadcom Mobile Multimedia group in 2004 with the acquisition of Alphamosaic, one of the Silicon Fen ‘success stories’. Laurent is also a junior angel investor and works with several technology startups in the Cambridge area.

Phil Claridge - Founder, Mandrel Systems

Phil Claridge is a ‘virtual CTO’ for hire within Mandrel Systems, covering end-to-end systems. He is currently having fun helping others with large-scale AI systems integration, country-wide big-data processing, hands-on IoT technology (from sensor hardware design, through LoRa integration, to back-end systems), and advanced city information modelling. He also supports companies with M&A ‘exit readiness’ and due diligence, and sits on advisory boards. Past roles include CTO, Chief Architect, Labs Director, and Technical Evangelist for Geneva/Convergys (telco), Arieso/Viavi (geolocation), and Madge (networking). Phil’s early career was in electronics, and he still finds it irresistible to swap from PowerPoint to a soldering iron and a compiler to produce proofs of concept when required.

Gunter Haberkorn - Senior Manager Product & Technology, Magna International

Peter Whale - Founder, Vision Formers

Peter is founder of Vision Formers, a specialist consultancy that helps visionary technology businesses get product to market and turn their ideas into reality. Peter has a long track record of conceiving, developing and marketing successful technology-based solutions, deployed at scale globally. Innovative products Peter has brought to market in digital, cloud, AI, consumer electronics and telecommunications are used daily by countless millions of people, badged by the world’s leading digital and technology brands. Peter is a board member of CW (Cambridge Wireless), and co-leads its Artificial Intelligence special interest group.

Event Location

Amazon Cambridge Development Center, 1 Station Square, Cambridge CB1 2GA

Related events

  • CW (Cambridge Wireless): A Tour of Orford Ness Nature Reserve Military Test Sites
  • CW (Cambridge Wireless): Deep Learning in Medical Imaging
  • CW (Cambridge Wireless): CW Clinics: Branding & Trademarks with EIP


