The UK’s leading broadcasters and ISPs have called for the government to introduce independent regulatory oversight of social media content.
The group of media and broadband operators in these tightly regulated industries spans both the state-funded and commercial sectors — with the letter to the Sunday Telegraph bearing the signatures of the leaders of the BBC, ITV, Channel 4, Sky, BT and TalkTalk.
They argue there’s an “urgent” need for independent oversight of social media, and counter suggestions that such a move would amount to censorship by pointing out that tech companies are already making choices about what to allow (or not) on their platforms.
They argue independent oversight is necessary to ensure “accountability and transparency” over those decisions, writing: “There is an urgent need for independent scrutiny of the decisions taken, and greater transparency. This is not about censoring the internet, it is about making the most popular internet platforms safer, by ensuring there is accountability and transparency over the decisions these private companies are already taking.”
“We do not think it is realistic or appropriate to expect internet and social media companies to make all the judgment calls about what content is and is not acceptable, without any independent oversight,” they add.
Calls for regulation of social media platforms have been growing from multiple quarters and countries, and politicians clearly feel there is political capital to spend here. (Indeed, Trump’s latest online punchbag is Google.)
Yet policymakers the world over face the challenge of how to regulate platforms that have become so popular and therefore so powerful. (Germany legislated to regulate social media firms over hate speech takedowns last year, but it remains in the vanguard of government action.)
The UK government has made a series of proposals around Internet safety in recent years, and the media and telco group argues this is a “golden opportunity” to act against what they describe as “all potential online harms” — “many of which”, they suggest, “are exacerbated by social media”.
The government is working on a white paper on Internet safety, and the Telegraph says potential interventions currently under private debate include the creation of a body along the lines of the UK’s Advertising Standards Authority (which reports to Ofcom), which it says could oversee Facebook, Google and Twitter to decide whether to remove material in response to complaints from users.
The newspaper adds that it is envisaged by proponents of this idea that such a regime would be voluntary but backed with the threat of a legislative crackdown if the online environment does not improve. (The EU has been taking this approach with hate speech takedowns.)
Commenting on the group’s letter, a government spokesperson told the Telegraph: “We have been clear that more needs to be done to tackle online harms. We are committed to further legislation.”
For their part, tech platforms claim they are platforms not publishers.
Yet their algorithms indisputably create hierarchies of information — which they also distribute at vast scale. At the same time they operate their own systems of community standards and content rules, which they enforce (typically imperfectly and inconsistently), via after-the-fact moderation.
The cracks in this facade are very evident — whether it’s a high-profile failure such as the Kremlin-backed mass manipulation of Facebook’s platform or smaller-scale but no less telling individual moderation failures. There are very clearly severe limitations to the self-regulation the companies typically enjoy.
Meanwhile, the impacts of bad content decisions and moderation failures are increasingly visible — as a consequence of the vast scale of (especially) Facebook and Google’s YouTube.
In the UK, a parliamentary committee that has been probing the impact of social media-amplified disinformation on democracy recently recommended the creation of a third category for regulating tech giants — neither platform nor publisher exactly, but one that tightens their liabilities.
The committee’s first report, following a long and drama-packed enquiry this year (thanks to the Cambridge Analytica Facebook data misuse scandal), also called for social media firms to be taxed to pay for major investment in the UK’s data protection watchdog so it is better resourced to be able to police data-related malfeasance.
The committee also suggested an education levy be raised off social media firms to pay for the digital literacy skills necessary for citizens to navigate all the stuff being amplified by their platforms.
In their letter to the Sunday Telegraph the group emphasizes their own investment in the UK, whether in the form of tax payments, original content creation or high-speed broadband infrastructure.

U.S. tech giants, by contrast, stand accused of making lower contributions to national coffers as a result of how they structure their businesses.
The typical tech firm response to tax-related critiques is to say they always pay the tax that is due. But technical compliance with the intricacies of tax law will do nothing to alleviate the reputational damage they could suffer if their businesses become widely perceived as leaching off (rather than contributing to) the nation state.
And that’s the political lever the media firms and ISPs appear to be seeking to pull here.
We’ve reached out to Facebook, Twitter and Google for comment.