Challenges of AI: Ethics and Interpretability

Brought to you by The Artificial Intelligence Group

Jess Whittlestone and Tameem Adel, postdoctoral researchers at the Centre for the Future of Intelligence, will discuss some of the biggest practical and technical challenges in AI ethics. What important dilemmas do we face? And how can we make vague goals like ‘interpretability’ technically precise?

About the event

Interest in the ethical implications of AI has exploded in the past few years, and various codes and commitments have been established across academia, industry, and policy. These all emphasise similar things: that AI should be used for the benefit of humanity, must respect widely held values such as privacy, justice, and autonomy, and must be made interpretable for humans. While agreeing on these principles is valuable, it’s still far from clear how we implement them in practice. 

One challenge is that widely-agreed principles come into conflict in concrete cases. It’s not clear how to resolve these tensions: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example? How do we get the benefits of data-driven personalisation without threatening important societal values like solidarity? We’ll discuss some problems with how the ethical issues surrounding AI are often talked about, and highlight some key dilemmas we need to face in turning principles into practice. 

A second challenge is that many of the goals of AI ethics are vastly underspecified. Here we’ll focus particularly on interpretability, widely considered crucial for ensuring ethical real-world deployment of intelligent systems. Part of the reason interpretability is deemed so important is that it helps us to ensure that other goals are met in the development and use of AI systems: that they are safe, reliable, and fair. The volume of research on interpretability is rapidly growing, but there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for these issues to be rigorously addressed. We’ll shed light on these questions about interpretability, as well as on state-of-the-art machine learning algorithms.

Join us for a fascinating discussion at this AI SIG byte-size event at Amazon Cambridge Development Center, followed by networking and refreshments.

You can follow @CambWireless on Twitter and tweet about this event using #AIByteSize.

Agenda

The information supplied below may be subject to change before the event.

17:30

Registration

18:00

CW welcome and introduction to Artificial Intelligence SIG, Bob Driver, CEO, CW

18:05

Welcome from event sponsor, David Paul, Director for Business Development, Magna International

18:15

‘Interpretability in Machine Learning’: Tameem Adel Hesham, Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Interpretability is often considered crucial for enabling effective real-world deployment of intelligent systems. Unlike performance measures such as accuracy, objective measurement criteria for interpretability are difficult to identify. The volume of research on interpretability is rapidly growing, but there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for these issues to be rigorously defined and addressed. One common taxonomy of interpretability in ML distinguishes global from local interpretability algorithms: the former aim at a general understanding of how the system works as a whole and of what patterns are present in the data, while the latter provide an explanation of a particular prediction or decision. Here, we shed light on issues related to interpretability, as well as on state-of-the-art machine learning algorithms.
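
As a rough illustration of the global/local distinction (a minimal sketch on a toy scikit-learn model, not material from the talk itself): a model's aggregate feature importances give a global view of which inputs drive its behaviour overall, while perturbing the features of a single instance gives a crude local explanation of one particular prediction.

```python
# Minimal sketch (assumed example, not from the talk): global vs. local
# interpretability on a toy scikit-learn model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y, feature_names = data.data, data.target, data.feature_names

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global interpretability: which features drive the model across the whole dataset?
print("Global feature importances:")
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"  {name}: {importance:.3f}")

# Local interpretability: explain one particular prediction by nudging each
# feature of a single instance and watching the predicted class probability
# (a crude, LIME-like sensitivity check).
x = X[0].copy()
base_prob = clf.predict_proba(x.reshape(1, -1))[0]
pred_class = int(np.argmax(base_prob))
print(f"\nLocal explanation for instance 0 (predicted class {pred_class}):")
for i, name in enumerate(feature_names):
    x_perturbed = x.copy()
    x_perturbed[i] += X[:, i].std()  # shift one feature by one standard deviation
    new_prob = clf.predict_proba(x_perturbed.reshape(1, -1))[0][pred_class]
    print(f"  {name}: change in class probability = {new_prob - base_prob[pred_class]:+.3f}")
```

Purpose-built tools such as LIME and SHAP make this local, per-prediction style of explanation more principled.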

18:35

'What do we need next in AI ethics?': Jess Whittlestone, Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Interest in the ethical implications of AI has exploded in the past few years across many circles, including academia, industry, and policy. Various codes and commitments for the ethical development and use of AI have been established, all emphasising similar things: that AI-based technologies should be used for the benefit of all humanity; that they must respect certain widely-held values such as privacy, justice and autonomy; and that it is essential we develop AI systems to be intelligible to humans. While agreeing on these principles is valuable, it’s still far from clear how we implement them in practice. What does it really mean to say that AI systems must be ‘intelligible’, or that they should preserve ‘autonomy’? What should we do when these principles come into conflict with one another: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example?
In this session, Jess will highlight some of the dilemmas we still need to face in ensuring the ethical use of AI systems in practice. She will discuss what work is needed next in AI ethics to turn principles into practice, and how those working with specific applications of AI can help.

18:55

Q & A

19:10

Wrap-up by Bob Driver, CEO, CW (Cambridge Wireless)

19:15

End of session followed by networking

Beer, pizza and soft drinks kindly provided by Magna International

20:30

Event closes

Speakers

Tameem Adel Hesham - Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Tameem Adel is a research fellow whose main research interests are machine learning and artificial intelligence, more specifically probabilistic graphical models, Bayesian learning and inference, medical applications of machine learning, deep learning and domain adaptation. He has also worked on developing transparent machine learning algorithms and on providing explanations of decisions taken by deep models.

He obtained his PhD from the University of Waterloo in 2014, advised by Prof. Ali Ghodsi. He was then a postdoctoral researcher at the Amsterdam Machine Learning Lab, advised by Prof. Max Welling.

Jess Whittlestone - Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Jess is a research associate focused on AI policy. She is particularly interested in how we can build appropriate levels of trust in AI systems amongst policymakers and the general public, and how to avoid harmful misperceptions of the capabilities and risks of AI.

Jess has a PhD in Behavioural Science from the University of Warwick, and a first class degree in Mathematics and Philosophy from Oxford University. In her PhD, she argued that confirmation bias is not necessarily as "irrational" as it seems, with implications for how we think about the strengths and weaknesses of human reasoning. Previously, Jess worked for the Behavioural Insights Team, where she advised various government departments on improving their use of behavioural science, evidence, and evaluation methods, with a particular focus on foreign policy and security. She has also worked as a freelance journalist and has had her writing published in Aeon, Quartz, and Vox.

SIG Champions

Laurent Brisedoux - Senior Manager, Amazon Cambridge Development Center

Laurent has been heading the Amazon R&D team in Cambridge, part of the Lab126 organization, since its creation in 2014. His group is responsible for developing software for Amazon’s consumer electronic devices such as Cloud Cam, Echo Look, Echo Show and many more innovative products to come. Prior to that, Laurent was in charge of the development and productisation of imaging technologies at Broadcom. He joined the Broadcom Mobile Multimedia group in 2004 with the acquisition of Alphamosaic, one of the Silicon Fen ‘success stories’. Laurent is also a junior angel investor and works with several technology startups in the Cambridge area.

Phil Claridge - Founder, Mandrel Systems

Phil Claridge is a ‘virtual CTO’ for hire within Mandrel Systems, covering end-to-end systems. He is currently having fun helping others with large-scale AI systems integration, country-wide large-scale big-data processing, hands-on IoT technology (from sensor hardware design, through LoRa integration, to back-end systems), and advanced city information modelling. He also supports companies with M&A ‘exit readiness’ and due diligence, and sits on advisory boards. Past roles include CTO, Chief Architect, Labs Director, and Technical Evangelist for Geneva/Convergys (telco), Arieso/Viavi (geolocation), and Madge (networking). Phil’s early career was in electronics, and he still finds it irresistible to swap from PowerPoint to a soldering iron and a compiler to produce proof-of-concepts when required.

Gunter Haberkorn - Senior Manager Product & Technology, Magna International

Gunter works in the Corporate R&D Department and has years of experience in complete vehicle engineering. He follows product and technology trends in the automotive industry, with special interest in autonomy, mobility solutions, manufacturing and electrification, and especially in technologies that can support advancements in these areas.

Vaiva Kalnikaite - Director, Dovetailed

Vicky Schneider - Senior Scientific Program Manager, Amazon Cambridge Development Center

Vicky is currently Senior Scientific Program Manager at the Amazon Development Centre in Cambridge, UK. She works in the Machine Learning Team and is involved in a variety of academic engagements around ML and AI, including outreach, education and training initiatives.

Previously she was Deputy Director of the EMBL Australian Bioinformatics Resource (www.embl-abr.org.au) and Associate Professor at the University of Melbourne. Vicky remains an Honorary Principal Fellow at the University of Melbourne and is an academic visitor at the University of Cambridge, where she collaborates with the Bioinformatics Training Team. Before Australia, Vicky was at the Earlham Institute (Norwich) as part of the Senior Management Team, first as Head of Training and Outreach and later as Head of the Scientific Training, Education and Learning Division. Before that, Vicky was responsible for the strategic coordination and implementation of the EMBL-EBI’s User-Training programme, which has now run for more than ten years and continues to provide training for the scientific users of the EMBL-EBI’s data services.

Prior to joining the EMBL-EBI in 2007, Vicky held an Assistant Professor position at the University of Bern and the Institute for Aquatic Sciences (EAWAG), and postdocs at the University of Zurich and the University of Rome (Tor Vergata). Vicky studied biology at the University of Rome and obtained her PhD on the evolution of sex at the University of Leiden (NL) and Lyon (France). She has been extensively involved in the acquisition, management and implementation of funded research and training projects throughout her career.

Peter Whale - Founder, Vision Formers

Peter is founder of Vision Formers, a specialist consultancy that helps visionary technology businesses get product to market and turn their ideas into reality. Peter has a long track record of conceiving, developing and marketing successful technology-based solutions, deployed at scale, globally. Innovative products Peter has brought to market in digital, cloud, AI, consumer electronics and telecommunications have been used by countless millions of people on a daily basis globally, badged by the world’s leading digital and technology brands. Peter is a board member of CW (Cambridge Wireless), and co-leads its Artificial Intelligence special interest group.

Event Location

Location info

Amazon Cambridge Development Center, 1 Station Square, Cambridge CB1 2GA
