In the age of surveillance capitalism, giving your driver’s licence to big tech could be the new normal. Is this the dystopian future we’re destined for, or a harsh reminder of the bigotry we’ve yet to tackle?
Last week brought the stark reality of online abuse into sharp focus for those of us privileged enough to usually escape the wrath of the vocal few behind a keyboard.
The European Championship final should have been a joyous occasion. Instead, when England lost, many fans turned on the three players they deemed responsible for the defeat: Marcus Rashford, Jadon Sancho, and Bukayo Saka, who received a slew of racist messages online (not for the first time).
Priti Patel, despite previously criticising anti-racist gestures made by the England team, tweeted that there was “no place” in the UK for racist abuse. Even Boris Johnson made a statement in support of the players. The hypocritical statements from these leaders simply pay lip service to the problem. Patel and Johnson have faced no real accountability for the racist remarks they have made in the past. If there are no consequences for them, why should the average person feel they’re doing anything wrong?
The world has been given a window into what it means to be Black online, and it has reinvigorated longstanding debates about how social media platforms can and should tackle online harassment. One of the more extreme solutions gaining traction is requiring users to submit photo ID when they set up a social media account.
In recent months, Irish Government and opposition TDs have called for the implementation of such a rule. In the UK, Katie Price started a petition for the cause, prompting a response from the UK government, which said the solution “may disproportionately impact vulnerable users and interfere with freedom of expression”.
The general idea is this: when you sign up for any social media account, be it Twitter, Facebook, or even TikTok, you will be legally required to submit photo ID.
By doing this, your account is inextricably connected to your identity. If your account is seen to be harassing people, you can be permanently banned from the platform and will be unable to start a new account.
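To make the mechanics concrete, here is a purely hypothetical sketch of how such a ban could be enforced at signup. No platform or government has published an implementation; the hashing scheme, function names, and ID format below are all invented for illustration.

```python
# Hypothetical sketch: tying account bans to a government ID.
# Everything here is invented; no real platform works this way (yet).
import hashlib

banned_ids = set()  # fingerprints of IDs belonging to banned users

def id_fingerprint(document_number: str) -> str:
    """Derive a stable fingerprint from a government ID number."""
    return hashlib.sha256(document_number.encode()).hexdigest()

def permanently_ban(document_number: str) -> None:
    """Record the ID so no future account can be created with it."""
    banned_ids.add(id_fingerprint(document_number))

def can_sign_up(document_number: str) -> bool:
    """Reject a signup if this ID was ever attached to a banned account."""
    return id_fingerprint(document_number) not in banned_ids

permanently_ban("IRL-1234567")
print(can_sign_up("IRL-1234567"))  # False: the same ID can't open a new account
print(can_sign_up("IRL-7654321"))  # True: an unbanned ID can still register
```

Even in this toy form, the design implies a central store linking identity documents to accounts, which is precisely the honeypot concern raised later in this piece.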
“I think it’s a bad solution,” says Dr Francesca Sobande, a lecturer in digital media studies at Cardiff University.
Dr Sobande feels that, ultimately, a regulation like this could harm the very people it’s trying to protect.
“When we think about the different groups of people who are most likely to be monitored or policed, or whose existence in society is considered ‘deviant’ because of issues to do with racism, sexism, ableism, xenophobia, and transphobia, and we think about the different forms of surveillance that already exist, the question comes to mind: how will this ID info potentially be used to contribute to the further surveillance of the people who are most discriminated against?”
Whistle-blowers, activists, political refugees, and domestic violence survivors are among those who often depend on anonymity online. Forcing them to tie their identity to their account could put them in danger.
“There’s this idea that efforts like this will only be used to address online abuse,” she says, “but the track record of big tech, and its pursuit of profit, suggests otherwise.”
Matt Navarra, social media consultant and host of the Geek Out podcast, agrees with her.
“We’ve already had this with Russia and other less trustworthy governments and bodies around the world, who are likely to want to get their hands on that kind of information,” explains Navarra.
“If everyone’s ID and information is held by a social network, it’s just ripe for somebody to try and hack in and take those details, and potentially identify people who were previously anonymous.”
“There’s a question of practicality and feasibility here as well,” Navarra adds. “Big platforms are definitely going to push back on this.”
“Platforms like Facebook who have two or three billion users – can you imagine the logistical exercise they’d have to go through to collate, store, and manage every single user’s ID?”
From a financial perspective, measures like these may not be popular with platforms either: the solution entails significant costs in time, resources, data storage, and data security.
“On top of that, they’ll recognise that lots of people won’t be comfortable providing their photo ID to a big private company, and so, that will limit their growth potential for new users. This will massively hit their bottom line in terms of ad revenue, because there are fewer users and fewer new users,” says Navarra.
Other industry experts feel that profit is at the core of this issue, and that big platforms are actively disincentivised to tackle this problem.
“Racism generates user engagement. That increases ad revenue by longer user sessions on the platform. It doesn’t even matter if the user likes or dislikes the racist content, so long as it keeps them clicking and commenting,” Christopher Wylie, the Canadian data consultant, author, and Cambridge Analytica whistle-blower, explained in a recent tweet.
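Wylie’s point can be illustrated with a toy feed-ranking sketch: an engagement score that counts every interaction the same, whether delighted or outraged. The posts, field names, and scoring rule below are entirely invented; real ranking systems are vastly more complex, but the incentive he describes is the same.

```python
# Toy engagement-based ranking: every interaction raises a post's score,
# regardless of whether it expresses approval or outrage. All data invented.
posts = [
    {"text": "What a brilliant save!", "likes": 30, "replies": 2},
    {"text": "(racist rant)", "likes": 3, "replies": 120},  # outrage still counts
]

def engagement_score(post: dict) -> int:
    # The ranker has no notion of sentiment: a furious reply and an
    # admiring one are worth exactly the same.
    return post["likes"] + post["replies"]

# The rant ranks first, because 123 interactions beat 32.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])
```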
Navarra feels differently – he thinks the political pressure facing these massively profitable, influential platforms is enough to make them want to change.
“Facebook, Twitter, any of those big platforms will not enjoy being the centre of attention when it comes to discussions around online harassment. There’s nothing to be gained from being in that position,” he explains.
Navarra concedes that there is some reluctance to make stricter rules and regulations on this issue.
“It does seem that no one wants to take responsibility. Governments don’t want to make the rules and be seen to be dictating to citizens what they can and can’t do or say online, and the platforms equally don’t feel it’s their job to be, as they call it, ‘the arbiters of truth,’” he says.
Twitter made a statement in February of this year saying that there was “no room for racist behaviour” on the platform.
At that point, 11 million tweets had been made by people in the UK about the Championship. Twitter claimed to have deleted over 5,000 of those for content violations. After England lost the Euros, Twitter deleted over 1,000 tweets and suspended a number of accounts.
The question remains: shouldn’t social media platforms be proactively preventing online harassment, rather than just deleting posts after the fact? Ordinary users have to report harassment and abuse to the apps again and again, and the perpetrators rarely face strict consequences. Studies have shown that Black people and People of Colour on Instagram face significantly more shadow bans and unexplained punishments, while racial abuse is handled with a lighter hand.
“The problem is, the tech they’re using, which is a combination of machine learning, natural language processing, and artificial intelligence (AI), isn’t sophisticated enough to be 100% accurate 100% of the time,” says Navarra, who feels sure that platforms would “love to have tech” that was.
“The nuances of languages around the world, and the context it’s posted within, and a whole host of very subtle factors to do with language, make it very tricky for these pieces of tech to identify if somebody is harassing or abusing somebody or whether it’s in jest, or in the context of someone describing something or asking a question,” Navarra explains.
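To illustrate the kind of failure Navarra describes, consider a deliberately naive keyword filter, the crudest possible stand-in for moderation tech. The blocklist and messages below are invented; production systems use far more sophisticated classifiers, but they stumble over the same context problems.

```python
# A context-blind keyword filter: a toy stand-in for moderation systems.
# The blocklist and example messages are invented for illustration.
ABUSIVE_TERMS = {"vermin", "scum"}

def is_flagged(message: str) -> bool:
    """Flag a message if it contains any blocklisted word, ignoring context."""
    words = {word.strip(".,!?'\"").lower() for word in message.split()}
    return not ABUSIVE_TERMS.isdisjoint(words)

examples = [
    "You lot are vermin.",                      # abuse: correctly flagged
    "Someone just called me 'vermin' - help?",  # victim quoting abuse: wrongly flagged
    "Go back to where you came from.",          # abuse with no keyword: missed entirely
]

for message in examples:
    print(is_flagged(message), "-", message)
```

The second and third examples show both failure modes at once: the victim gets flagged while the abuser, who used no blocklisted word, sails through.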
The inherent bias of technology like AI must also be considered. AI algorithms are often created by white, able-bodied, heterosexual, cisgender men. This homogeneity inevitably imbues the algorithms with bias, and many have been shown to produce racist outcomes.
A prime example came in 2017, when Deborah Raji, a 21-year-old Black woman from Ottawa, was working at Clarifai, a start-up that built technology to automatically recognise objects in digital images and planned to sell it to businesses, police departments, and government agencies. She discovered that over 80% of the faces the company used to train its facial recognition software were white, and the majority were male. The people choosing the training data were mostly white men who did not realise their data was biased.
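An audit of the kind that might have caught that skew can be sketched in a few lines. The metadata fields and records below are hypothetical; in practice, demographic labels for training data are often missing entirely, which is part of the problem.

```python
# A minimal training-set audit: tally demographic attributes before training.
# The records and field names are hypothetical stand-ins for image metadata.
from collections import Counter

training_faces = [
    {"skin_tone": "light", "gender": "male"},
    {"skin_tone": "light", "gender": "male"},
    {"skin_tone": "light", "gender": "female"},
    {"skin_tone": "dark", "gender": "male"},
]  # stand-in for thousands of labelled images

def audit(records: list, field: str) -> None:
    """Print each value's share of the dataset for the given field."""
    counts = Counter(record[field] for record in records)
    total = len(records)
    for value, count in counts.most_common():
        print(f"{field}={value}: {count / total:.0%}")

audit(training_faces, "skin_tone")  # light: 75%, dark: 25%
audit(training_faces, "gender")     # male: 75%, female: 25%
```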
Any effort to create AI to tackle this problem will have to have people of colour, trans people, disabled people, and queer people at its centre if it’s going to be effective.
Defining what is and isn’t hate speech has also proved tricky territory. The meaning of ‘free speech’ is often warped by members of the alt-right, and by people who are racist, anti-LGBTQ, or anti-immigrant, who see spreading hatred about these groups as a matter of ‘free speech’.
Debates have swirled around the UK’s flagship ‘Online Safety Bill’ and how it delineates free speech from hate speech. In a recent Guardian article, Gaby Hinsliff points out the complexities of regulating speech like this, and how dependent these definitions are on one’s (potentially bigoted) beliefs: “to say that biological sex is real and immutable would be seen in some circles as transphobic hate speech, and in others as a perfectly reasonable statement of fact.”
Simply increasing the capacity of social media platforms to punish this kind of behaviour is by no means the end of the story. There’s a root cause to this problem, and it’s one we must face.
“Big tech should do more, but this isn’t just about big tech,” says Dr Sobande. “Racism isn’t something that’s specific to social media.
“It’s systemic, it’s structural, and it’s something that needs to be tackled accordingly in society – any work to address online racism, needs to be part of broader and sustained work to address racism, period.”
It’s important to remember that for a lot of people, last week’s outpouring is not news.
“What we’re seeing directed at these football players is part of the daily lives of many Black people who participate in different digital spaces, and Black women face the intersection of sexism and racism.
“This is an important moment to reckon with the reality of what it means to be Black and visible online,” explains Dr Sobande.
“I’m pleased to see more of these conversations happening right now, but it’s also very frustrating when I know just how many people have been doing work to try and address these issues, and hold different institutions and public figures accountable, and how it takes something such as this for questions to be raised and politicians to make statements, when we know how many Black people have been facing this abuse on a daily basis for many, many years,” she says.
It seems, then, that the first and most essential step towards ending racism online is to recognise that it is an extension of offline racism. It’s more visible because more people can see a tweet than can witness in-person harassment, but it’s not a unique form of discrimination.
As Navarra points out, “racism wasn’t introduced when Facebook was launched.”