What do we need next in AI ethics?

Interest in the ethical implications of AI has exploded in the past few years across academia, industry, and policy. Various codes and commitments for the ethical development and use of AI have been established, all emphasising similar things: that AI-based technologies should be used for the benefit of all humanity; that they must respect widely-held values such as privacy, justice, and autonomy; and that it is essential we develop AI systems to be intelligible to humans. While agreeing on these principles is valuable, it is still far from clear how to implement them in practice. What does it really mean to say that AI systems must be ‘intelligible’, or that they should preserve ‘autonomy’? What should we do when these principles come into conflict with one another: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example? In this session, Jess will highlight some of the dilemmas we still need to face in ensuring the ethical use of AI systems in practice. She will discuss what work is needed next in AI ethics to turn principles into practice, and how those working with specific applications of AI can help.
