NATIONAL NEWS: Tech companies signed up to the Christchurch Call


Editor’s note: The opinions in this article are the author’s, as published by our content partner, and do not necessarily represent the views of MSN or Microsoft.

Instead of using what amounts to censorship, tech companies signed up to the Christchurch Call would be wise to adopt a more preventative approach, writes the University of Otago’s Alistair Knott.


We have heard a lot recently from the world’s tech giants about what they are doing to implement the pledge they signed up to in the Christchurch Call. But one recent announcement may signal a particularly interesting development. As reported in the New Zealand Herald, the world’s social media giants ‘agreed to join forces to research how their business models can lead to radicalisation’. This marks an interesting shift from a reactive approach to online extremism towards a preventative one.

Until now, the tech companies’ focus has been on improving their methods for identifying video footage of terrorist attacks when it is uploaded, or as soon as possible afterwards. To this end, Facebook has improved its AI algorithm for automatically classifying video content, to make it better at recognising (and then blocking or removing) footage of live shooting events. The algorithm in question is a classifier, which learns through a training process. In this case, the ‘training items’ are videos showing a mixture of real shootings and other miscellaneous events.
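To make the idea of a trained classifier concrete, here is a minimal sketch in Python. The real systems are large deep networks over video and audio, and their details are not public; this toy example uses a simple logistic-regression model over invented feature vectors that merely stand in for videos, to show what ‘training’ and ‘classifying’ mean in this context.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for video feature vectors: 200 "videos", 8 features each.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
# Synthetic labels: 1 = footage of a live shooting event, 0 = anything else.
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression classifier by gradient descent on the logistic loss.
w = np.zeros(8)
for _ in range(500):
    p = sigmoid(X @ w)                     # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / len(y)

# A newly uploaded "video" would be blocked or escalated if this probability is high.
new_video = rng.normal(size=8)
print("P(live shooting footage) =", round(float(sigmoid(new_video @ w)), 3))
```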

The Christchurch Call basically commits tech companies to implementing some form of Internet censorship. The methods adopted so far have been quite heavy-handed: they involve preventing content from being uploaded, removing content that is already online, or blocking content from users’ search results. Such moves are always closely scrutinised by digital freedom advocates. Companies looking for ways to adhere to the Christchurch pledge are strongly incentivised to find methods that avoid heavy-handed censorship.

In this connection, it is interesting to consider another classifier used by Facebook and other social media companies, one that sits at the very centre of their operation: the classifier that decides which items users see in their feeds. This classifier is called a recommender system, and it is trained to predict which items users are most likely to click on.

There is some evidence that recommender systems have a destabilising effect on currents of public opinion. This is because the training data for a recommender system is its users’ current clicking preferences. The problem is that recommender systems also influence these preferences, because the items they predict to be most clickable are also prioritised in users’ feeds. Their predictions are in this sense a self-fulfilling prophecy, amplifying and exaggerating whatever preferences they detect in users.
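The feedback loop can be illustrated with a deliberately simplified simulation. In the sketch below (all numbers invented, and far simpler than any real feed-ranking system), a recommender ranks items by their observed click-through rate, users click only slightly more on outrageous items, and items ranked higher are seen more often, so their extra clicks feed back into the next ranking.

```python
import numpy as np

n_items = 20
outrage = np.linspace(0.0, 1.0, n_items)   # how outrageous each item is
true_ctr = 0.30 + 0.10 * outrage           # users' small intrinsic preference for outrage
clicks = np.ones(n_items)                  # smoothed click counts
shows = 2 * np.ones(n_items)               # smoothed impression counts

for _ in range(500):
    ranking = np.argsort(-(clicks / shows))      # most "clickable" items first
    for position, item in enumerate(ranking):
        exposure = 1.0 / (1 + position)          # higher feed slots are seen more
        shows[item] += exposure
        clicks[item] += exposure * true_ctr[item]

top5 = np.argsort(-(clicks / shows))[:5]
print("mean outrage, top of feed:", outrage[top5].mean().round(2))   # roughly 0.9
print("mean outrage, all items:  ", outrage.mean().round(2))         # 0.5
```

Even though the simulated users prefer outrageous items only slightly, the top of the feed ends up dominated by the most outrageous content: a small preference, amplified by the ranking loop.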

This effect may cause recommender systems to polarise public opinion, by leading users to extremist positions. As is well-known, people have a small tendency to prefer items that are controversial, scandalous or outrageous – not because they are extremists, but just because it’s human nature to be curious about such things. This small tendency can be amplified by recommender systems. Obviously, social media systems aren’t responsible by themselves for extremism. But there’s evidence they push in this direction. A recent study from Brazil is particularly convincing, showing that Brazilian YouTube users consistently migrate from milder to more extreme political content, and that the recommender algorithm supports this migration.

Tech companies certainly don’t design their recommender systems to encourage extremism. The systems are simply designed to maximise the amount of time users spend viewing content from their own site – and thus to maximise profits from their advertisers. A tech company’s recommender system is a core part of its business model. This is why it’s so interesting to hear reports, for the first time, that social media companies are beginning to question whether their ‘business model’ can lead to extremism.

It’s conceivable that very small changes in recommender algorithms could counteract their subtle effect of tilting public opinion towards extremism. Any such changes would still be a form of ‘Internet censorship’, but they would be a very light touch. There is no question of deleting material from the Internet, preventing uploads, or blocking users’ search requests. In fact, there is no denial of user requests at all, since recommender systems already deliver content unbidden into users’ social media feeds. Recommender systems are already making choices on behalf of users; at present, those choices are driven purely by tech companies’ drive to maximise profits. What’s being contemplated are subtle changes to these systems that take the public good into account alongside profits.
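One way to picture this kind of light-touch adjustment, purely as a hypothetical sketch rather than anything the companies have announced, is a feed score that blends predicted engagement with a small, tunable penalty from a separate (assumed) extremism classifier. Nothing is deleted or blocked; borderline items are simply ranked a little lower. The weighting and the scores below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_click: float   # recommender's engagement estimate, 0..1
    extremism_score: float   # hypothetical classifier output, 0..1

def feed_score(item: Item, public_good_weight: float = 0.3) -> float:
    """Engagement score minus a small penalty for predicted extremism."""
    return item.predicted_click - public_good_weight * item.extremism_score

items = [
    Item("local news roundup",       predicted_click=0.20, extremism_score=0.02),
    Item("celebrity scandal",        predicted_click=0.35, extremism_score=0.10),
    Item("conspiracy 'documentary'", predicted_click=0.40, extremism_score=0.80),
]

# A pure engagement ranking would put the conspiracy video first; the adjusted
# ranking demotes it without removing it from the feed.
for item in sorted(items, key=feed_score, reverse=True):
    print(round(feed_score(item), 2), item.title)
```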

As well as being less heavy-handed in censorship terms, these changes have a preventative flavour rather than a reactive one. Rather than waiting for terrorist incidents and then responding, the proposed changes would act pre-emptively, to defuse the currents that lead to extremism. They are very appealing from this perspective too.

The question of how recommender algorithms could be modified to defuse extremism is an important one for debate, both within tech companies, and in the public at large. The tech companies are best placed to run experiments with different versions of the recommender system and observe their effects. (They routinely do this already.) The public should have a role in discussing what sorts of extremism should be counteracted. (There’s presumably no harm in being an extreme Star Wars fan.) The crucial thing is to begin a discussion between the tech companies and the public they claim to serve. We hope we are seeing the beginnings of this discussion in the recent announcement.

Source - msn