Shining a light into the black box

Written by Vicki MacLeod on 1 Dec 2017

This article is from the CW Journal archive.

How can we ensure that AI-enabled technologies are driven by diversity and inclusion?

Artificial intelligence and machine learning will transform every aspect of human life. Underlying this transformation are algorithms and data sets which might appear to be devoid of human weaknesses, but which in reality reflect a range of social and cultural biases that have the potential to undermine the effectiveness of AI-based systems.

The operations and functions performed by algorithms on data sets are carried out inside what data scientists call a 'black box', meaning that any biases contained in the data are not immediately apparent to either the coder or the user.

Here are a few examples.

  • Data-driven bias: Also known as "garbage in, garbage out" – a system trained on skewed data reproduces that skew.
  • Interactive bias: Microsoft's racist chatbot Tay quickly learned race hatred from Twitter.
  • Emergent bias: Facebook's algorithms decide which posts we want to see and exclude others.
  • Similarity bias: Algorithms distort the content we see, keeping each of us inside a personalised online bubble.
  • Conflicting-goals bias: Systems designed for one business purpose may have hidden biases which undermine that purpose (for example, potential job candidates may be put off by the use of stereotypes).
  • Linguistic bias: The meaning of a word is distilled into a series of numbers, a word vector, based on which other words it is statistically most associated with. "Female" ends up more closely associated with the humanities and the home, whereas "male" is associated with maths and engineering (see the sketch after this list).
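
The linguistic-bias point can be made concrete with a toy calculation. The sketch below uses small, hand-made vectors (purely hypothetical numbers, not a real trained embedding such as word2vec or GloVe) and plain cosine similarity to show how a word such as "female" can end up numerically closer to "home" than to "engineering" simply because of the statistics of the text it was learned from.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the words point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional "word vectors", invented for illustration only.
# Real embeddings have hundreds of dimensions and are learned from large
# text corpora, which is exactly where the statistical bias creeps in.
vectors = {
    "female":      np.array([0.9, 0.1, 0.7, 0.2]),
    "male":        np.array([0.1, 0.9, 0.2, 0.7]),
    "home":        np.array([0.8, 0.2, 0.6, 0.1]),
    "engineering": np.array([0.2, 0.8, 0.1, 0.9]),
}

for word in ("female", "male"):
    for target in ("home", "engineering"):
        score = cosine(vectors[word], vectors[target])
        print(f"{word:>6} vs {target:<11}: {score:.2f}")
```

With these invented numbers, "female" scores highest against "home" and "male" against "engineering". Any system built on top of such vectors (ranking CVs, targeting adverts, completing search queries) inherits that skew without anyone ever writing an explicitly biased rule.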

"Algorithms impose consequences on people all the time", says Carina Zona, a developer and tech evangelist who has challenged the hubris of computer scientists who believe that all data is equal.

"Data is not objective, it replicates flaws and our tunnel vision" – creating false positives, Zona argues. Algorithmic hubris, she says, leads to the idea that machines know better than people. Many examples of false positives or "data abuse" arise from unstated, hidden assumptions about an individual's or a group's behaviour or characteristics, with unforeseen negative consequences.

Unless the industry can adequately and effectively address the many concerns being raised around AI and machine learning, there will be growing social resistance to their application. And without full transparency and the conscious setting of our objectives, the industry is in danger of storing up problems which we will at some point have to fix – if in fact it is not too late to do so.

AI learns about how the world has been. It doesn't know how the world ought to be. That's up to humans to decide.

Recent AI diversity failures

As discussed at the CW Inclusive Innovation conference on 19 September

Crime prediction AI was racist
A Minority Report-style AI built by a company called Northpointe was intended to predict which alleged criminals would reoffend. In 2016, analysis of its output showed that black offenders were much more likely than white offenders to be judged high-risk by the software.

Microsoft's infamous chatbot, 'Tay'
The chatbot Tay.ai was launched on Twitter early in 2016 in an attempt to reach out to young people. Originally intended to resemble a normal, pleasant teenage girl, Tay was turned into a "Hitler-loving, feminist-bashing troll" by just 24 hours of exposure to Twitter.

The racist AI beauty contest
Launched in 2016 by deep learning group Youth Laboratories, Beauty.ai was intended to select "the First Beauty Queen or King Judged by Robots." Of 44 winners only one was dark-skinned, a far lower proportion than among the entrants.

Pokémon Go favoured white neighbourhoods
It wasn't long after the hugely popular Pokémon Go game launched that users noticed there were fewer PokéStops and Gyms (locations in the game) to be found in real-world black neighbourhoods. This turned out to be a problem with the data, which was drawn from a previous game played mostly by better-off white players.

What can be done to address these concerns?

A growing number of data scientists are speaking out about the dangers of bias and the lack of diversity in AI-based systems, and the need for greater cultural sensitivity and diversity safeguards. Aylin Caliskan, a post-doctoral researcher at Princeton University, is one of them. She believes that humans using these programs need to constantly ask, "Why am I getting these results?" and check their output for bias. They also need to think hard about whether the data they are using, and combining with other data, reflects historical prejudices.
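
One concrete form of the check Caliskan describes is to break a model's output down by group and compare error rates. The sketch below is a minimal, hypothetical illustration (the group labels and records are invented, and this is not any particular vendor's audit procedure): it takes a list of (group, predicted high-risk, actually reoffended) records and reports the false positive rate for each group, the statistic at the centre of the Northpointe controversy.

```python
from collections import defaultdict

def false_positive_rates(records):
    """False positive rate per group.

    A false positive is someone flagged high-risk who did not reoffend;
    the rate is taken over everyone in the group who did not reoffend.
    """
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            counts[group]["negatives"] += 1
            if predicted_high_risk:
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["negatives"]
            for g, c in counts.items() if c["negatives"] > 0}

# Toy, invented data: (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

for group, fpr in sorted(false_positive_rates(records).items()):
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

A large gap between the groups, the pattern reported for the Northpointe system, is exactly the kind of result that should trigger the "Why am I getting these results?" question before a model is deployed.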

Women scientists are increasingly taking a stand against inbuilt gender and other biases in AI. The tech sector itself needs to recognise that women should be involved in analysing the problems and developing the solutions if AI-based systems are to reach their true potential.

Melinda Gates has recognised the dangers associated with unconscious bias and lack of diversity in AI and is working with Professor Fei-Fei Li of Stanford University to set up a new organisation, AI4ALL, to research solutions to these issues.

"As an educator, as a woman, as a woman of color, as a mother," Fei-Fei says, "I'm increasingly worried. AI is about to make the biggest changes to humanity and we're missing a whole generation of diverse technologists and leaders."

The Partnership on AI (PAI) was established in late 2016 "to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society."

Its membership includes Google, Facebook, Microsoft, Apple, IBM and Amazon, and the PAI recently advertised for its founding Executive Director. The Partnership on AI has already released its own Tenets for AI Technologies based on a belief that:

"Artificial intelligence technologies hold great promise for raising the quality of people's lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education."

While these principles are certainly a step in the right direction, they are arguably still open to individual subjective interpretation and insufficiently precise to be taught to others. One person's views of human rights might not be shared by other individuals or other cultures.

MALICIOUS MIRROR: Microsoft suspended the Twitter account of Tay, an AI bot built to sound out millennials, after it started sending racist messages.

The need for standards on culture and diversity in AI

There are two related issues that need to be addressed to develop culturally diverse, transparent, trustworthy and reliable AI systems: firstly the algorithms and coding, and secondly the cultural values that we are going to impart to these systems. One way to address these concerns is through the development of clear and transparent voluntary industry standards.

Algorithms driving AI need to be certifiable and auditable. If they are reused and taken from one application to another, the underlying data sets should be traceable and open to review by researchers who can evaluate the potential bias of the data sets and the relevance and transparency of the operation and formulas being used.

To achieve this, existing technical standards bodies as well as legal experts need to be involved in developing and reviewing these standards and the associated workbooks and training manuals for those working on AI systems. So should community stakeholders. The development of reliable, transparent and auditable standards or industry codes could also provide a competitive advantage to the country or region concerned, and thus stimulate international trade and development based on AI.

Above all, we need to work together to ensure that diversity and inclusion drive both the technical as well as the societal elements of AI. Industry organisations, standards bodies and technical experts need to become champions of diversity in all its forms – cultural, gender, ethnic and racial – to reduce the potential risks to the future development of AI associated with human bias.

Vicki MacLeod
Secretary General - GTWN

Before her work with Perfect Ltd and the GTWN, Vicki held various policy and regulatory positions at Australia's largest communications provider, Telstra, and represented Telstra on the OECD's Business and Industry Advisory Committee (BIAC) in Paris. Vicki's work now focuses on helping organisations become more resilient through innovation, so that they are better able to deal with disruption brought about by technological change.
