
Journalists, academics, and government advisers debate how best to regulate online speech

25 June 2019

How do we prevent harms caused by online content, and should the state, or internet giants like Facebook and Twitter, be primarily responsible for regulating content published on their platforms?

This was the central question addressed at a workshop held by the Foundation for Law, Justice and Society at Wolfson College last week, which assessed how online speech is and should be governed, both in China and the West.

______________________________________________________________________________________________

LISTEN NOW:

Podcast: Governance of Public Opinion in the Age of Platforms: A Study of China

Podcast: Responses to the Government White Paper on Online Harms and the ‘right to be forgotten’ 

______________________________________________________________________________________________

 

Jufang Wang, a former news editor in China and academic visitor at the BBC, opened the workshop by describing her doctoral research into the Communist Party’s efforts to control public opinion in China through its regulation of social media platforms such as Weibo, WeChat, and Toutiao – an algorithm-based news aggregator.

As the gap widens between the official pronouncements of the Party and the views and opinions expressed through social media, the state is increasingly concerned that this divergence could erode its legitimacy in the eyes of its citizens. In response, platforms are regulated as online news providers and licensed, so that their ability to operate can be revoked if they fail to comply with the state's requirements, all of which is enforced through round-the-clock policing of content. The sheer scale of online content means that Toutiao alone employs around 10,000 content moderators to ensure compliance with the government's policy of 'collaborative' governance of free speech through the platforms.

Oxford doctoral researcher Pu Yan, acting as workshop commentator, questioned how far this censorship extends, and why even non-politically sensitive stories, relating to child abuse for example, are also being censored.

Ralph Schroeder from the Oxford Internet Institute then presented his conception of online platforms as part of a complex infrastructure controlled by algorithmic logic. He argued that the fragmented, weak civil society in China cannot mobilize as a coherent threat to the state, and that the state would not be best served by a blanket repression of social media through which civil society expresses its opinions, and through which trends in popular opinion can be identified.

The controversial ‘social credit system’ the Chinese state employs for surveillance and social management of the population is perceived in the West as Orwellian – yet Prof Schroeder cited research to indicate that Chinese citizens see it as a means to protect themselves against the unscrupulous behaviour of private companies.

The UK government’s White Paper on Online Harms, currently under public consultation, was the focus of Damian Tambini’s presentation. The LSE lecturer and government adviser acknowledged the aspiration to hold internet giants such as Facebook and Google to account for harmful content published on their platforms, while cautioning that a careful balance must be struck: too heavy a hand would invite accusations of state censorship, or impose regulatory burdens so great that only a few companies could meet them.

The White Paper proposes that a new body, possibly to be known as Ofweb, should be created to regulate social media companies and enforce a positive duty of care on the part of such companies to protect their users against harms associated with hate speech, images of self-harm, and terrorist content.

Media law researcher Roxana Radu brought the workshop to a close with her enlightening explanation of the algorithmic logic used in processing the vast quantities of personal data held by social media companies. Her central thesis was that these algorithms are highly biased, amplifying existing societal inequalities and introducing new ones.

She went on to review the specific rights that might arise from new phenomena in the digital age, such as the recently introduced ‘right to be forgotten’, whereby a Spanish lawyer won a legal case against Google to have information relating to a past debt removed from search results for his name, on the grounds that it was harming his right to do business.

Podcasts of the presentations will be available to download from our Podcasts pages in the next week, and Policy Briefs will be published on our Publications pages and submitted to the government's consultation on its Online Harms White Paper before it closes at the end of the week.

The workshop was the last event of this academic year, but a full programme of lectures, workshops, book colloquia, and film screenings will resume in the autumn. To receive podcasts, policy briefs, and invitations to forthcoming events, subscribe to our bimonthly newsletter, and follow us on Twitter.