18 Oct 2019

Six ways technologists and regulators could build a safer internet

How can government and industry address the growing problem of online harms? It is a subject technologists need to be aware of, so that they have the understanding and skills to avoid contributing to the problem through the systems they build. The blog below outlines some ideas.

1. Leverage the potential of artificial intelligence in content moderation processes

The 2019 report by Cambridge Consultants on behalf of Ofcom provides a comprehensive overview of how artificial intelligence can be integrated alongside human moderators in the moderation of user-generated content. Even though the content it must interpret is often highly complex and culturally nuanced, AI has proven able to identify high-risk images, videos and written media and either remove them autonomously or flag them for a human to check further. It can also limit human moderators’ exposure to explicit content and reduce the toll such a job might otherwise take on their mental health, for example by blurring the most harmful areas of an image or through a technique known as “visual question answering”.
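To make that division of labour concrete, here is a minimal sketch of a human-in-the-loop triage step in Python. The classifier (`score_content`), the thresholds and the item IDs are all illustrative assumptions rather than a description of any particular vendor’s system: the model removes content automatically only when it is very confident, routes borderline items to a human reviewer, and allows the rest.

```python
# A minimal sketch of human-in-the-loop content triage.
# `score_content` is a stand-in for a trained model; thresholds are illustrative.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    REMOVE = "remove automatically"
    HUMAN_REVIEW = "queue for human review (blurred preview)"
    ALLOW = "allow"


@dataclass
class ModerationResult:
    item_id: str
    risk_score: float  # 0.0 (benign) .. 1.0 (high risk)
    action: Action


def score_content(item_id: str) -> float:
    """Stand-in for a trained image/text model; returns fixed demo scores."""
    demo_scores = {"img-001": 0.97, "img-002": 0.55, "img-003": 0.05}
    return demo_scores.get(item_id, 0.0)


def triage(item_id: str, remove_at: float = 0.95, review_at: float = 0.40) -> ModerationResult:
    """Auto-remove only at high confidence; route borderline items to a human."""
    score = score_content(item_id)
    if score >= remove_at:
        action = Action.REMOVE
    elif score >= review_at:
        action = Action.HUMAN_REVIEW
    else:
        action = Action.ALLOW
    return ModerationResult(item_id, score, action)


if __name__ == "__main__":
    for item in ("img-001", "img-002", "img-003"):
        result = triage(item)
        print(f"{result.item_id}: score={result.risk_score:.2f} -> {result.action.value}")
```

In a real deployment the stand-in scorer would be replaced by a trained model, and the blurring of previews for human reviewers would happen in the review tool rather than in this triage step.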

The downside of adopting AI in content moderation is that building a proprietary set of algorithms is extremely expensive, especially given the competition for skilled AI engineers. Development would also have to be continuous: contextual understanding is key, and online content constantly adapts to new media formats and cultural fashions. And AI is unlikely to become the sole form of content moderation for the foreseeable future.

2. Challenge the sacred truths of the internet

The internet has always been free to use and anonymous. Consumers are used to accessing services without revealing their location, age or identity, or disclosing what content they consume. The extent of online harms suggests that this freedom has come at a cost. Under what circumstances should society push back? Traditional law allows the state to compromise an individual’s right to privacy for reasons of law enforcement, with oversight. "Internet law" allows companies to compromise your privacy for reasons of profit, with no oversight. Should we, and how can we, rally mass public support for greater law enforcement in cyberspace when consumers are so protective of their online freedoms?

3. Build an unfragmented regulatory framework

At present there is no single framework or enforcement body that manages online harms. Regulation spans from the GDPR, which protects the use of personal data, to the Digital Economy Act 2017, which requires age verification for access to online pornography. Relevant regulators include Ofcom and the Electoral Commission, to name a few. The UK Government’s Online Harms White Paper ambitiously proposed a new, coherent framework covering many aspects of online harms, along with the establishment of an enforcement body to hold content-sharing platforms to account.

4. Prioritise diversity in data

One of the main challenges facing the technology industry is to ensure that the algorithms it deploys in important processes, such as the moderation of online content, are free of bias. If a developer uses training data that is not representative of the wider population, or unknowingly builds their own unconscious bias into the system, it can have a disproportionate effect on different parts of society. If the wider public is to trust the content moderation systems deployed by the large technology companies, it is hugely important that those companies understand and mitigate the effect of bias during development. This can be done, for example, by using generative AI techniques to produce representative datasets for testing.
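As a simple illustration of the kind of check this implies, the sketch below compares the demographic make-up of a made-up labelled training set against an assumed population baseline and flags under-represented groups. The group names, proportions and tolerance are all hypothetical.

```python
# A minimal sketch of a representation check on training data.
# Groups, proportions and tolerance are illustrative assumptions only.

from collections import Counter

# Hypothetical training examples: (text, demographic group of the author/subject)
training_data = [
    ("example comment 1", "group_a"),
    ("example comment 2", "group_a"),
    ("example comment 3", "group_a"),
    ("example comment 4", "group_b"),
]

# Assumed share of each group in the wider population the system will serve.
population_share = {"group_a": 0.5, "group_b": 0.5}

TOLERANCE = 0.10  # flag groups under-represented by more than 10 percentage points


def representation_report(data, reference, tolerance):
    """Compare observed group shares in the data against a reference population."""
    counts = Counter(group for _, group in data)
    total = sum(counts.values())
    report = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": observed,
            "expected": expected,
            "under_represented": expected - observed > tolerance,
        }
    return report


if __name__ == "__main__":
    for group, stats in representation_report(training_data, population_share, TOLERANCE).items():
        flag = " <-- under-represented" if stats["under_represented"] else ""
        print(f"{group}: observed {stats['observed']:.0%} vs expected {stats['expected']:.0%}{flag}")
```

A check like this only catches imbalance in who appears in the data; it says nothing about bias in how items were labelled, which needs separate auditing.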

5. Invest

A new regulatory framework needs to sit alongside a programme of financial stimulus for the development of online safety technology and a third-party market for content moderation systems. Some out-of-the-box tools are already emerging, including SuperAwesome (for the protection of children’s digital privacy), Crisp (a provider of AI for content monitoring and moderation) and Yoti (safeguarding children by using machine learning to estimate age). The UK Government has committed £300,000 to fund five projects that disrupt live child sexual exploitation and abuse (CSEA); five UK companies have already received £50,000 to develop and test a prototype detector of terrorist still imagery, from which the leading proposals will receive up to £500,000; and individual domestic departments (such as the National Cyber Security Centre) are running accelerators to stimulate rapid development of relevant tools. But it also feels like there is a market for a service that offers a more curated space than the current “free to use, free to abuse” open internet. YouTube has already started this process with YouTube Kids, an app created to give children a more contained YouTube environment that makes it simpler and more fun for them to explore on their own.

6. Share intelligence

Some companies have already developed effective systems for detecting and responding to online harms, and have opened these platforms and technologies up so that they can be adopted by the wider industry. Examples include Microsoft’s PhotoDNA, a shared system for identifying child sexual abuse images, and Google’s Perspective API, which uses machine learning to flag toxic content to moderators. At the invitation of the UK Government, five of the world’s largest tech companies joined a hackathon in 2018 and developed a new tool for identifying child grooming activities. Participants analysed tens of thousands of conversations to understand the patterns used by predators and fed these to engineers, who developed algorithms to detect those patterns automatically and accurately. Is there a greater role for standardisation here, so that there is a common format for exchanging relevant information? If so, who should coordinate that activity?
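To illustrate the general idea behind shared hash lists such as PhotoDNA, the sketch below matches an uploaded image against a small "known" list using perceptual hashing. PhotoDNA itself is proprietary, so this uses the open-source `imagehash` library, a locally generated image and an illustrative match threshold; it is not how PhotoDNA actually computes its hashes.

```python
# A minimal sketch of hash-list matching, the general idea behind shared systems
# such as PhotoDNA. Requires: pip install pillow imagehash.

from PIL import Image, ImageDraw, ImageFilter
import imagehash

# Build a tiny "known" hash list from a locally generated image. In practice the
# list would come from a trusted body and contain millions of hashes.
known_image = Image.new("RGB", (128, 128), "white")
draw = ImageDraw.Draw(known_image)
draw.rectangle([20, 20, 100, 100], fill="red")
draw.ellipse([40, 40, 80, 80], fill="blue")
shared_hash_list = [imagehash.phash(known_image)]

# Simulate an upload that has been slightly altered to evade exact matching.
uploaded = known_image.filter(ImageFilter.GaussianBlur(radius=2))
uploaded_hash = imagehash.phash(uploaded)

MATCH_THRESHOLD = 8  # maximum Hamming distance treated as a match (illustrative)

is_match = any(uploaded_hash - known <= MATCH_THRESHOLD for known in shared_hash_list)
print(f"uploaded hash: {uploaded_hash}, matches shared list: {is_match}")
```

The point of a perceptual hash is that small alterations (here, a blur) still fall within the match threshold, which is what makes a shared list useful across many platforms.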