05 Feb 2018

Shining a light into the black box

How can we ensure that AI-enabled technologies are driven by diversity and inclusion?

Artificial intelligence and machine learning will transform every aspect of human life. Underlying this transformation are algorithms and data sets which might appear to be devoid of human weaknesses, but which in reality reflect a range of social and cultural biases with the potential to undermine the effectiveness of AI-based systems.

The operations and functions performed by algorithms on data sets are carried out inside what data scientists call a ‘black box’, meaning that any biases embedded in the data are not immediately visible to either the coder or the user.

Here are a few examples:

  • Data-driven bias: Biased or unrepresentative training data produces biased output ('garbage in, garbage out')
  • Interactive bias: Microsoft's chatbot Tay quickly learned racist language from its interactions with Twitter users
  • Emergent bias: Facebook's algorithms decide which posts we want to see and exclude others
  • Similarity bias: Algorithms steer us towards content similar to what we already consume, so each of us ends up inside a personalised online bubble
  • Conflicting-goals bias: Systems designed for one business purpose may have hidden biases which undermine that purpose (for example, potential job candidates may be put off by stereotyped job advertising)
  • Linguistic bias: The meaning of a word is distilled into a series of numbers – a word vector – based on which other words it is statistically most associated with. "Female" is more closely associated with the humanities and the home, whereas "male" is associated with maths and engineering (see the sketch after this list)
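
To make the linguistic bias example concrete, the short Python sketch below shows how such associations are typically measured: by comparing the cosine similarity between word vectors. The four vectors here are invented toy values chosen to mimic the reported skew – not a real trained model – so the numbers are purely illustrative.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity: 1.0 = same direction, 0.0 = unrelated.
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    # Hypothetical 3-dimensional embeddings, hand-made for illustration only;
    # real word vectors (e.g. word2vec, GloVe) have hundreds of dimensions
    # and are learned from large text corpora.
    vec = {
        "female":      np.array([0.9, 0.1, 0.2]),
        "male":        np.array([0.1, 0.9, 0.2]),
        "humanities":  np.array([0.8, 0.2, 0.3]),
        "engineering": np.array([0.2, 0.8, 0.3]),
    }

    for gender in ("female", "male"):
        for field in ("humanities", "engineering"):
            print(f"{gender:>6} ~ {field:<11}: {cosine(vec[gender], vec[field]):.2f}")

In embeddings trained on real-world text, the same comparison shows "female" scoring higher against humanities and home-related terms and "male" against maths and engineering terms – precisely the statistical association described above.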

“Algorithms impose consequences on people all the time”, says Carina Zona, a developer and tech evangelist who has challenged the hubris of computer scientists who believe that all data is equal.

“Data is not objective, it replicates flaws and our tunnel vision” – creating false positives, Zona argues. Algorithmic hubris, she says, leads to the idea that machines know better than people. Many examples of false positives or “data abuse” arise from unstated, hidden assumptions about an individual’s or a group’s behaviour or characteristics, with unforeseen negative consequences.

Without the ability to adequately and effectively address the many concerns being raised around AI and machine learning, there will be growing social resistance to their application. And without full transparency and conscious setting of our objectives, the industry is in danger of storing up problems that we will eventually have to fix – if by then it is not already too late to do so.

AI learns about how the world has been. It doesn’t know how the world ought to be. That’s up to humans to decide.

What can be done to address these concerns?

A growing number of data scientists are speaking out about the need for more awareness of the dangers of bias and lack of diversity in AI-based systems, and for greater cultural sensitivity and diversity safeguards. Aylin Caliskan, a post-doctoral researcher at Princeton University, is one of them. She believes that humans using these programs need to constantly ask, “Why am I getting these results?”, and to check the output of these programs for bias. They also need to think hard about whether the data they use, and the data they combine it with, reflect historical prejudices.

Women scientists are increasingly taking a stand against inbuilt gender and other biases in AI. The tech sector itself needs to recognise that women must be involved in analysing the problems and developing the solutions if AI-based systems are to reach their true potential.

Melinda Gates has recognised the dangers associated with unconscious bias and lack of diversity in AI, and is working with Professor Fei-Fei Li of Stanford University to set up a new organisation, AI4ALL, to research solutions to these issues.

“As an educator, as a woman, as a woman of color, as a mother,” Fei-Fei says, “I’m increasingly worried. AI is about to make the biggest changes to humanity and we’re missing a whole generation of diverse technologists and leaders.”

The Partnership on AI (PAI) was established in late 2016 “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

Its membership includes Google, Facebook, Microsoft, Apple, IBM and Amazon, and the partnership recently advertised for its founding Executive Director. PAI has already released its own Tenets for AI Technologies, based on a belief that:

“Artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education.”

While these principles are certainly a step in the right direction, they are arguably still open to individual subjective interpretation and insufficiently precise to be taught to others. One person’s views of human rights might not be shared by other individuals or other cultures.

The need for standards on culture and diversity in AI

There are two related issues that need to be addressed to develop culturally diverse, transparent, trustworthy and reliable AI systems: firstly the algorithms and coding, and secondly the cultural values that we are going to impart to these systems. One way to address these concerns is through the development of clear and transparent voluntary industry standards.

Algorithms driving AI need to be certifiable and auditable. If they are reused or transferred from one application to another, the underlying data sets should be traceable and open to review by researchers, who can then evaluate the potential bias of the data sets and the relevance and transparency of the operations and formulas being used.
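
As a hedged illustration of what such an auditable check might look like in practice, the sketch below computes a system's selection rate for each group in a set of logged decisions and applies the 'four-fifths' rule, a yardstick borrowed from US employment guidelines. The data, group labels and threshold are all assumptions made for illustration; a real audit regime would define them within the standards discussed here.

    from collections import defaultdict

    # Logged (group, decision) pairs; invented data for illustration only.
    # decision = 1 means the system produced a positive outcome
    # (e.g. a job candidate was shortlisted).
    records = [
        ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
        ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
    ]

    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision

    rates = {g: positives[g] / totals[g] for g in totals}
    print("selection rates:", rates)

    # Four-fifths rule: flag the system if any group's selection rate
    # falls below 80% of the highest group's rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * highest:
            print(f"audit flag: {group} selected at {rate:.0%}, "
                  f"versus {highest:.0%} for the best-served group")

Demographic parity of this kind is only one of several possible fairness metrics; which metric an audit should apply is itself one of the value judgements that standards bodies would need to settle.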

To achieve this, existing technical standards bodies, legal experts and community stakeholders all need to be involved in developing and reviewing these standards and the associated workbooks and training manuals for those working on AI systems. The development of reliable, transparent and auditable standards or industry codes could also provide a competitive advantage to the country or region concerned, and thus stimulate international trade and development based on AI.

Above all, we need to work together to ensure that diversity and inclusion drive both the technical as well as the societal elements of AI. Industry organisations, standards bodies and technical experts need to become champions of diversity in all its forms – cultural, gender, ethnic and racial – to reduce the potential risks to the future development of AI associated with human bias.


Vicki Macleod

Expert on innovation and resilience

 


 

Before her work with Perfect Ltd and the GTWN, Vicki held various policy and regulatory related positions in Australia’s largest communications provider, Telstra, and represented Telstra on the OECD’s Business and Industry Advisory Committee (BIAC) in Paris. Vicki’s work now focusses on helping organisations become more resilient through innovation, so that they are better able to deal with the disruption brought about by technological change.


Continue the conversation on Twitter. Follow us on @cwjpress and use hashtags #cwjournal and #diversity


This article was originally published in the CW Journal: Volume One, Issue Two.

If you would like to receive a free copy of the CW Journal, please submit your details here. The CW Journal Editorial Board welcomes comment from those of you who would like to submit an article - simply email your synopsis to the team.

 
