Challenges of AI: Ethics and Interpretability

Brought to you by The Artificial Intelligence Group

Jess Whittlestone and Tameem Adel, postdoctoral researchers at the Centre for the Future of Intelligence, will discuss some of the biggest practical and technical challenges in AI ethics. What important dilemmas do we face? And how can we make vague goals like ‘interpretability’ technically precise?

Registration for this event is now closed.

About the event

Interest in the ethical implications of AI has exploded in the past few years, and various codes and commitments have been established across academia, industry, and policy. These all emphasise similar things: that AI should be used for the benefit of humanity, must respect widely held values such as privacy, justice, and autonomy, and must be made interpretable for humans. While agreeing on these principles is valuable, it’s still far from clear how we implement them in practice. 

One challenge is that widely-agreed principles come into conflict in concrete cases. It’s not clear how to resolve these tensions: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example? How do we get the benefits of data-driven personalisation without threatening important societal values like solidarity? We’ll discuss some problems with how the ethical issues surrounding AI are often talked about, and highlight some key dilemmas we need to face in turning principles into practice. 

A second challenge is that many of the goals of AI ethics are vastly underspecified. Here we’ll focus particularly on interpretability, widely considered crucial for ensuring ethical real-world deployment of intelligent systems. Part of the reason interpretability is deemed so important is that it helps us to ensure that other goals are met in the development and use of AI systems: that they are safe, reliable, and fair. The volume of research on interpretability is growing rapidly, but there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for these questions to be addressed rigorously. We’ll shed light on them, and on the state-of-the-art machine learning algorithms that aim to answer them.

Join us for a fascinating discussion at this AI SIG byte-size event at Amazon Cambridge Development Center, followed by networking and refreshments.

You can follow @CambWireless on Twitter and tweet about this event using #AIByteSize.

Sponsored by Magna International

Leading global automotive supplier.

Agenda

The information supplied below may be subject to change before the event.

17:30

Registration

18:00

CW welcome and introduction to Artificial Intelligence SIG, Bob Driver, CEO, CW

18:05

Welcome from event sponsor, David Paul, Director for Business Development, Magna International

18:15

‘Interpretability in Machine Learning’: Tameem Adel Hesham, Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Interpretability is often considered crucial for enabling effective real-world deployment of intelligent systems. Unlike performance measures such as accuracy, objective criteria for measuring interpretability are difficult to identify. The volume of research on interpretability is growing rapidly, yet there is still little consensus on what interpretability is, how to measure and evaluate it, and how to control it. There is an urgent need for these issues to be rigorously defined and addressed. One common taxonomy of interpretability in ML distinguishes global from local interpretability. Global interpretability aims at a general understanding of how the system works as a whole, and of what patterns are present in the data; local interpretability, by contrast, provides an explanation of a particular prediction or decision. Here, we shed light on these issues, as well as on state-of-the-art machine learning algorithms.
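
As a rough illustration of the global/local distinction above (our own sketch, not material from the talk), the toy Python snippet below contrasts a crude global sensitivity measure with a LIME-style local surrogate explanation. The model, the data, and all names here are hypothetical.

```python
# Illustrative sketch: contrasting global and local interpretability
# on a toy "black box". Everything below is a hypothetical example.
import numpy as np

rng = np.random.default_rng(0)

# A toy black box: a fixed logistic model over 3 features.
weights = np.array([2.0, -1.0, 0.5])

def black_box(X):
    """Return P(y=1 | x) for each row of X."""
    return 1.0 / (1.0 + np.exp(-X @ weights))

# -- Global interpretability: how does the model behave overall? --
# One simple global view: average change in output when each feature is
# perturbed across a sample of the data (a crude sensitivity measure).
X = rng.normal(size=(1000, 3))
eps = 1e-3
global_sensitivity = np.array([
    np.mean(np.abs(black_box(X + eps * np.eye(3)[j]) - black_box(X))) / eps
    for j in range(3)
])
print("global sensitivity per feature:", global_sensitivity.round(3))

# -- Local interpretability: why this particular prediction? --
# LIME-style idea: sample points near one instance, then fit a simple
# (here linear) surrogate whose coefficients explain that one decision.
x0 = np.array([1.0, 0.5, -0.5])
neighbours = x0 + 0.1 * rng.normal(size=(500, 3))
y_near = black_box(neighbours)
# Least-squares linear surrogate fitted only on the neighbourhood.
A = np.hstack([neighbours, np.ones((500, 1))])
coef, *_ = np.linalg.lstsq(A, y_near, rcond=None)
print("local explanation (surrogate coefficients):", coef[:3].round(3))
```

The global numbers summarise the model’s behaviour over a whole sample, while the surrogate coefficients explain only the single prediction at x0.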

18:35

‘What do we need next in AI ethics?’: Jess Whittlestone, Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Interest in the ethical implications of AI has exploded in the past few years across many circles, including academia, industry, and policy. Various codes and commitments for the ethical development and use of AI have been established, all emphasising similar things: that AI-based technologies should be used for the benefit of all humanity; that they must respect certain widely-held values such as privacy, justice and autonomy; and that it is essential we develop AI systems to be intelligible to humans. While agreeing on these principles is valuable, it’s still far from clear how we implement them in practice. What does it really mean to say that AI systems must be ‘intelligible’, or that they should preserve ‘autonomy’? What should we do when these principles come into conflict with one another: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example?
In this session, Jess will highlight some of the dilemmas we still need to face in ensuring the ethical use of AI systems in practice. She will discuss what work is needed next in AI ethics to turn principles into practice, and how those working with specific applications of AI can help.

18:55

Q & A

19:10

Wrap-up by Bob Driver, CEO, CW (Cambridge Wireless)

19:15

End of session followed by networking

Beer, pizza and soft drinks kindly provided by Magna International

20:30

Event closes

Speakers

Tameem Adel Hesham - Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Tameem Adel is a research fellow whose main research interests are machine learning and artificial intelligence, more specifically probabilistic graphical models, Bayesian learning and inference, medical applications of machine learning, deep learning and domain adaptation. He has also worked on developing transparent machine learning algorithms and on providing explanations of decisions taken by deep models.

He obtained his PhD from the University of Waterloo in 2014, advised by Prof. Ali Ghodsi, and was subsequently a postdoctoral researcher at the Amsterdam Machine Learning Lab, advised by Prof. Max Welling.

Jess Whittlestone - Postdoctoral Researcher, Leverhulme Centre for the Future of Intelligence

Jess is a research associate focused on AI policy. She is particularly interested in how we can build appropriate levels of trust in AI systems amongst policymakers and the general public, and how to avoid harmful misperceptions of the capabilities and risks of AI.

Jess has a PhD in Behavioural Science from the University of Warwick, and a first-class degree in Mathematics and Philosophy from Oxford University. In her PhD, she argued that confirmation bias is not necessarily as "irrational" as it seems, with implications for how we think about the strengths and weaknesses of human reasoning. Previously, Jess worked for the Behavioural Insights Team, where she advised various government departments on improving their use of behavioural science, evidence, and evaluation methods, with a particular focus on foreign policy and security. She has also worked as a freelance journalist and has had her writing published in Aeon, Quartz, and Vox.

SIG Champions

Darendra Appanah - Senior Test Engineer, Cambridge Consultants

Darendra is part of the Systems Engineering & Test department, where he applies his expertise in Digital Security Testing to various technologies, including Machine Learning and AI. He has a background in Software Test Automation of wireless communications protocols, such as satellite communications and LTE. He enjoys defining and developing testing strategies for cybersecurity challenges and ensuring the quality and reliability of innovative solutions.

Maria Axente - Head of AI Public Policy & Ethics, PwC UK

Maria is a globally recognised, award-winning expert in AI ethics and public policy, a member of various advisory boards, including the UK All-Party Parliamentary Group on AI (APPG AI), NATO EDT, and the SKEMA AI Institute, and Chair of the techUK Data and AI leadership committee. In her current role as Head of AI Public Policy and Ethics, she aligns PwC's AI strategies with ethical considerations and regulatory trends, fostering collaboration with external stakeholders and leading PwC's responses to public policy consultations and initiatives. Maria's commitment to responsible AI has made her a recognised thought leader and influencer in the field. She is a passionate advocate for children's rights in the age of AI, serving as a member of the advisory boards for the UNICEF #AI4Children and World Economic Forum Generation AI programmes. She also serves as an Intellectual Forum Senior Research Associate at the University of Cambridge, researching human-centric AI and the intersection of tech policy and ethics.

Phil Claridge - Founder, Mandrel Systems

Phil Claridge is a ‘virtual CTO’ for hire within Mandrel Systems, covering end-to-end systems. He is currently having fun helping others with large-scale AI systems integration, country-wide big-data processing, hands-on IoT technology (from sensor hardware design, through LoRa integration, to back-end systems), and advanced city information modelling. He also supports companies with M&A ‘exit readiness’ and due diligence, and serves on advisory boards. Past roles include CTO, Chief Architect, Labs Director, and Technical Evangelist for Geneva/Convergys (telco), Arieso/Viavi (geolocation), and Madge (networking). Phil’s early career was in electronics, and he still finds it irresistible to swap from PowerPoint to a soldering iron and a compiler to produce proofs-of-concept when required.

Parminder Lally - Partner, Appleyard Lees IP LLP

Parminder is a patent attorney based in Appleyard Lees’ Cambridge office, and helps companies to protect their technological innovations. She has built a substantial reputation working with high-growth start-ups, spin-outs and SMEs in Cambridge, and has in-house experience. She specialises in writing and prosecuting patent applications for computer-implemented inventions. Her work includes patenting AI-based technologies, including new machine learning frameworks and applications of machine learning in image classification, human-computer interactions and text-to-speech. Parminder also writes her own AI blog on LinkedIn, and is a member of the Chartered Institute of Patent Attorneys’ Computer Technology Committee.

Simon Thompson - Head of AI, ML & Data Science, GFT Financial Ltd

Simon leads a team that develops AI and ML solutions for large financial institutions. Before joining GFT he was the Principal Investigator for BT’s AI programme; before that he was Head of Practice for Big Data and Customer Experience at BT, BT’s lead for collaborations with MIT, and the first industry fellow at the Alan Turing Institute. Simon is interested in the practical application of AI technology and in the practice and process of AI and ML projects. His book “Managing Machine Learning Projects” was published by Manning in 2023.

Peter Whale - Founder & CEO, Vision Formers

Peter is Founder & CEO of Vision Formers, a specialist consultancy that supports and mentors leaders of visionary technology businesses, helping them get products to market and turn ideas into reality.

Vision Formers works with start-ups and scale-ups, providing significant expertise in accelerating business growth through a focus on developing a robust product strategy, growing and coaching product and development teams, and providing operational excellence. Peter has a long track record of conceiving, developing, and marketing successful technology-based solutions deployed at scale globally. Innovative products he has brought to market in digital, cloud, AI, consumer electronics, and telecommunications have been used daily by countless millions of people worldwide, badged by the world’s leading digital and technology brands.

Peter also works with Digital Catapult as Programme Manager for UKTIN, working with partners and stakeholders to deliver UKTIN’s mission to transform the UK telecoms innovation ecosystem, capitalising on the country’s strengths in technology, academia, and entrepreneurialism, while positioning it for growth as new opportunities emerge in the industry.

Peter is a board member of CW (Cambridge Wireless), a Fellow of the IET, a Chartered Engineer, and a member of the Association of Business Mentors.

Event Location

Amazon Cambridge Development Center, 1 Station Square, Cambridge CB1 2GA

Related events

  • Cambridge Wireless: Delights & Disasters of RF Measurements
  • Cambridge Wireless: Backhauling the Rural Mobile Broadband Service
  • Cambridge Wireless: Risk, perception, management and mitigation in RF Safety
