Just who do you think you are? Identity, anonymity and misbehaviour online

Blog published by CW (Cambridge Wireless)

The first event in the Engineering Trust programme was held on Thursday 7 February at the Bradfield Centre and was led by James Chapman, Laura James, Paul Morris and Tim Phipps.

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness.

So Charles Dickens opened A Tale of Two Cities, a novel set just before the French Revolution, and his words apply just as well to the impact digital technologies are now having on 21st-century society. They have brought with them the potential to democratise opinion, so that everyone has the opportunity to be heard. They have brought with them immense access to information, knowledge and entertainment. They have also brought with them risk, misinformation and distrust.

The ruthless anonymity of the internet inspired this first event within the Engineering Trust programme. James Chapman, CTO of MIRACL, opened activities with an overview of how identity is currently established online and potential routes to making the process more secure.

The main lesson from this talk was the fragility of authorisation and authentication processes, and the need for online systems that handle personal and sensitive information to avoid a single point of failure. Challenges were highlighted within the onboarding process of online services, as well as other clear flaws: for example, while onboarding may require a lengthy process of authentication (such as providing personal information and documentation), the recovery process (e.g. for lost passwords) often simply requires access to an email address.

The clear rookie errors in personal online security occur when a user deploys the same password across multiple sites, or stores passwords in a password management system which is in turn protected by a long password that the user won't remember and has to store physically. Yet more sophisticated security solutions are not flawless either, as Lockheed Martin found out in 2011 when its security provider, RSA, was hacked, leading to a subsequent breach of Lockheed Martin's own systems. Even the public/private key system can fall apart when trust is consolidated in a single point, such as a certificate authority. This is exemplified by Google's loss of faith in Symantec.

Recently there has been a move away from trusting a single organisation with all aspects of the authentication process towards using third-party providers to handle different areas of verification. These horizontal verifiers include the likes of OpenID Connect, which allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable, REST-like manner.
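To make that flow concrete, here is a minimal sketch (not from the talk) of how a client application might use an OpenID Connect provider: the user is sent to the Authorization Server to log in, the client exchanges the returned code for tokens, and a REST-like userinfo call retrieves basic profile data. All endpoints, client credentials and the redirect URI shown are hypothetical placeholders.

```python
# Rough sketch of the OpenID Connect authorization-code flow described above.
# Endpoints and credentials are illustrative placeholders, not details from
# the talk or from any specific provider.
from urllib.parse import urlencode
import requests

AUTH_SERVER = "https://auth.example.com"           # hypothetical Authorization Server
CLIENT_ID = "example-client-id"                    # assumed client registration
CLIENT_SECRET = "example-client-secret"
REDIRECT_URI = "https://app.example.com/callback"

def build_login_url(state: str) -> str:
    """URL the Client redirects the End-User to for authentication."""
    params = urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "state": state,
    })
    return f"{AUTH_SERVER}/authorize?{params}"

def exchange_code(code: str) -> dict:
    """Swap the authorization code returned to the redirect URI for tokens."""
    resp = requests.post(f"{AUTH_SERVER}/token", data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # includes an id_token asserting identity, plus an access_token

def fetch_profile(access_token: str) -> dict:
    """REST-like call for basic profile information about the End-User."""
    resp = requests.get(f"{AUTH_SERVER}/userinfo",
                        headers={"Authorization": f"Bearer {access_token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```

In a real deployment the client would also validate the id_token's signature and claims against the provider's published keys before trusting the asserted identity.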

This move is good, but it still consolidates trust in a single point. Distributing trust is better. Blockchain has received much attention as a potential source of transactional security, removing single points of failure such as certificate authorities. By spreading trust, distributed ledger technologies can provide a new source of online protection. Similarly, the SOLID programme currently being supported by Sir Tim Berners-Lee seeks to flip the idea of personal data distribution on its head. Rather than spilling personal data on every site visited, the idea of SOLID is that the user keeps all their information in a personal POD and links this data to individual sites. These links are clearly reviewable by the individual and can be updated as necessary. A third approach comes from modern cryptography, which allows private keys to be split – a key could consist of something that the individual "knows" (e.g. a PIN) as well as something she "is" (such as a fingerprint) – and this two-factor authentication of a private key can offer great improvements in security.
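To illustrate the idea of splitting a key across two factors, here is a deliberately simplified toy sketch using XOR-based secret sharing and a password-stretching KDF. It is not the construction described in the talk or used by any particular product; it simply shows that neither factor alone (the PIN or the stored device share) is enough to recover the key.

```python
# Toy illustration of splitting a private key into two factors: one share is
# derived from something the user knows (a PIN) and the other is stored on the
# device (standing in for something the user is/has). Simplified sketch only,
# not any specific product's scheme.
import hashlib
import secrets

def derive_share_from_pin(pin: str, salt: bytes, length: int = 32) -> bytes:
    """Stretch a short PIN into a fixed-length share with a password KDF."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000, dklen=length)

def split_key(private_key: bytes, pin: str, salt: bytes) -> bytes:
    """Return the device share: key XOR pin-share. Neither part alone reveals the key."""
    pin_share = derive_share_from_pin(pin, salt, len(private_key))
    return bytes(k ^ p for k, p in zip(private_key, pin_share))

def recombine(device_share: bytes, pin: str, salt: bytes) -> bytes:
    """Both factors are needed to reconstruct the original private key."""
    pin_share = derive_share_from_pin(pin, salt, len(device_share))
    return bytes(d ^ p for d, p in zip(device_share, pin_share))

if __name__ == "__main__":
    key = secrets.token_bytes(32)                        # the private key to protect
    salt = secrets.token_bytes(16)
    device_share = split_key(key, "4921", salt)
    assert recombine(device_share, "4921", salt) == key  # correct PIN recovers the key
    assert recombine(device_share, "0000", salt) != key  # wrong PIN yields garbage
```

A biometric factor would replace the PIN with a secret derived from, say, a fingerprint template, but the principle of requiring both shares remains the same.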

Ella McPherson, Lecturer in the Sociology of New Media and Digital Technology, followed James' technology-themed introduction with a human rights-angled view of identity online. She is currently working on two projects at the University of Cambridge: The Whistle, a digital start-up for reporting and verification, and another with Amnesty International.

Ella recalled that when social media arose, there were high hopes in the human rights community that this new technology would give a voice to those who found it hard to be heard. It offered a channel for democratising power. Yet a decade later, the world is in the age of fake news and post-truth, and it is extremely difficult for researchers and activists to sort misinformation from the truth. There is also the fact that it is no longer the ability to speak that matters most, but the ability to be heard – to be found. Google's search engine has a lot of power in filtering what the majority of people end up seeing.

Ella takes the process of speaking and being heard one step further, to the point of being verified. For human rights activists, it is important to be able to ascertain that the person to whom you are speaking is indeed that person. If you can’t verify the source, then you can’t use online content as evidence.

As fake news has risen, so too has the scepticism of internet users, and our readiness to believe new voices has fallen. There are several ways that individuals come to trust sources online, and one of them is to believe something that someone you already trust says is okay. One such system was the Twitter blue badge, a verification-at-a-glance scheme that became useful for truth-hunting professions such as journalism and activism.

The blue badge system started as a tool for celebrities to confirm their identity so that fans followed the right person, and it extended from there. However, until part way through the scheme there was no way to request verification; it was just something Twitter did for you. This was particularly problematic for human rights defenders in countries where state journalists were having their accounts verified but political opponents could not. It prevented voices of dissent from being heard, and from being believed. Stats also started to emerge that demonstrated clear gender bias in the verification process: of active verified users in Australia, 83.3% were male and 17.7% were female.

Even when it became possible for users to apply to have their accounts verified, the process was not safe and inclusive. It is unlikely that dissidents in certain countries would be keen to share their personal details (such as a copy of a driving licence) with Twitter for fear of being uncovered. Similarly, the free-text boxes, Ella assumes, reward a certain level of spelling and grammar, which would automatically favour users with more formal education.

The identity verification system installed by Twitter led to a platform where you were more likely to be heard and believed, and to then have your voice amplified, if you were well educated, in a profession or famous, male, and able to share your identity without being put at risk. Unless, of course, you had a couple of thousand pounds to buy a blue badge on the black market – which was possible.

The Twitter verification process ended when the company moved away from using the badge as an identity verification tool and began deploying it as a moral judgement. In November 2017 it removed the blue badge from a number of white supremacists. This censoring of content caused a backlash – understandably – and added a level of complexity to the idea of identity verification online.

The final quickfire talk of the afternoon came from Jon Roozenbeek, a postgraduate student exploring the role of (social) media, propaganda, and disinformation. He concluded with the psychologist’s approach to teaching users how to handle misinformation online: vaccination through education.

In his view, some of the misinformation online is simple, un-malicious entertainment. Take, for example, the news site The Onion, "America's Finest News Source", which is yet to post a serious story. Most readers recognise this as humour – but there have been instances of its stories getting out of control, for example when the Chinese newspaper People's Daily picked up and covered its piece on Kim Jong-Un being named "The Sexiest Man Alive".

Some misinformation is mischievous trolling. However, some goes further. The example used by Jon was the story covered by News World (see image below).

[Image: Engineering Trust - Misleading News]

The video received 1.2M views, yet there was no evidence that those involved were Muslim or immigrants, or that they were attacking a Catholic church during mass. It emerged that this was a completely legal process taking place. The content was true, but the context in which it was being shared was false. Misinformation can occur through emotional language, polarisation, conspiratorial content, impersonation (real or fake accounts), trolling, false amplification, and manipulation. It's important to note that deceptive does not necessarily mean fake.

Yet fake news spreads far more rapidly than true news. The Guardian reported in May 2017 that when Facebook's software labelled a story as fake news and warned users against sharing it, traffic to the article skyrocketed.

What tools are available to combat the situation? Regulation is a potential route, and countries like France and Malaysia have gone that way, but it opens up a can of worms around freedom of expression. Classroom education is possible, but it relies on teachers understanding the social media habits of their students, and on groups vulnerable to fake news having access to an effective classroom.

Technology is not good at spotting fake news – it is hard to train a program to judge a piece of content that even humans can struggle with. Artificial intelligence capable of grasping deception and opinion is not currently possible.

Jon's team is currently researching an inoculation theory for fake news, employing tools such as gamification to increase its popularity and effectiveness. Through sites such as www.getbadnews.com, which display fake news in a safe environment, it is hoped that internet users can better come to understand the features of fake news, and therefore recognise potentially untrustworthy content in the future. Early trials are showing promising results.

The first event of the Engineering Trust programme ended with delegates splitting into four workshops, each exploring a different aspect of identity online, which will be explored in future blogs. The next event in the Engineering Trust programme is in the planning stages – details will be released on the Programme page shortly.
