Interest in the ethical implications of AI has exploded in the past few years across many circles, including academia, industry, and policy. Various codes and commitments for the ethical development and use of AI have been established, all emphasising similar things: that AI-based technologies should be used for the benefit of all humanity; that they must respect widely-held values such as privacy, justice, and autonomy; and that it is essential we develop AI systems to be intelligible to humans. While agreeing on these principles is valuable, it is still far from clear how to implement them in practice. What does it really mean to say that AI systems must be ‘intelligible’, or that they should preserve ‘autonomy’? And what should we do when these principles come into conflict with one another: how much privacy should we be willing to sacrifice in developing life-saving technologies, for example? In this session, Jess will highlight some of the dilemmas we still face in ensuring the ethical use of AI systems in practice. She will discuss what work is needed next in AI ethics to turn principles into practice, and how those working on specific applications of AI can help.