Regulating in a Digital World

by ukcivilservant

The hatred and bile hosted by Facebook and Twitter, the death of Molly Russell, Trump’s (and pre-referendum) fake news, etc. etc. have caused all right-thinking liberals to crave new regulation and a new regulator.  But might they come to regret their enthusiasm?  Today’s (9 March) House of Lords report adds to the pressure for regulation, and contains much useful material, but its recommendations deserve critical scrutiny.  Here are some thoughts, in the order in which the text appears in the report.

The cornerstone of the report is its assertion that “… a large volume of activity occurs online which would not be tolerated offline”. (Summary)  Well … maybe it wouldn’t be tolerated in The Times or on Sky News, but it is certainly tolerated – or at least unregulated – in private groups, in playgrounds, and in some recent newspaper headlines aimed at Brexiteer ‘traitors’ and judicial ‘enemies of the people’.  So the recommendation in the Summary that ‘the same level of protection must be provided online as offline’ looks to me to be a serious over-simplification.

Another recommendation is that ‘[internet] services must act in the interests of users and society’.  I would hate to be a regulator tasked with enforcing that principle.

Still in the Summary, whilst I can see the case for an oversight regulator (‘the Digital Authority’), I doubt that it should have the power to ‘instruct’ other regulators what to do.  This would be a recipe for regulatory confusion.

The next para refers to network effects as though they result in an inevitable (and implicitly near-permanent) ‘winner takes all’.  But this suggestion has been debunked by MIT’s Catherine Tucker, who points to the once apparently inevitable domination of Microsoft and MySpace, the ease of switching between Lyft and Uber, and the relative failure of Google Plus.  (I strongly recommend her very readable articles here and here.)

There is an important discussion (paras 27-32) about the advantages of principles-based regulation over rules-based systems.  It’s all very seductive, but it ends by noting that “No form of regulation will be effective unless it is enforced.  Enforcement mechanisms must have sufficient resources and be rigorously applied.”  And that is indeed the problem.  It can make sense to allow industry to find its own best way to meet a regulatory objective, having provided it with advice and guidance – but not if some in the industry are ill-disposed to regulation, and/or if non-compliance can be dangerous.  Remember Grenfell Tower.  Again, I would not like to be responsible for defending the enforcement record of an “unaccountable, non-elected” regulator charged with enforcing their Lordships’ principles.

Much of the rest of the report contains useful material and sensible recommendations about data protection, competition law etc.  But Chapter 5 dives back into controversy as it tackles the hot topic of ways to curb bullying, online abuse, extremist content and political misinformation.  The discussion is of high quality, as one would expect from their Lordships, and in particular they say very sensible things about improving content moderation by Facebook and the rest.  They also broadly support the Carnegie Trust/Woods/Perrin ‘duty of care’ proposals, under which action against online service providers “should only be in respect of systemic failures” rather than individual instances of speech.  The report also endorses the ‘Digital Authority’ recommended by Doteveryone.  (Click here for more detail.)

But the report then seems to go further and faster than Perrin and Woods, and leaps to the unqualified conclusion that ‘the precautionary principle’ requires ‘the remit of Ofcom [to] be expanded to include responsibility for enforcing the duty of care’.  (The principle supposedly applies because ‘the scale and risk of these issues is unproven’.)  However, the report does no more than nod at the concerns about the encroachment on free speech, and the general regulatory morass in which Ofcom could so easily find itself.  Graham Smith has written very elegantly about this – see here and here, for instance.

More particularly, is Ofcom seriously intended to hold Twitter, Facebook, Mail Online etc. to the same standards as printed media?  Would Ofcom have to consider requests to ban Labour’s alleged anti-Semitism as well as Tommy Robinson’s alleged racism?  Would a duty of care mean a refusal to republish President Trump’s many lies, or the views of anti-vaxxers?

I do like the Perrin/Woods approach, and I am convinced that we face problems so severe that something needs to be done to address them.  But I don’t think we can sensibly expect Ofcom to undertake this new responsibility without a lot more thought and guidance than this report appears to offer.

 

Martin Stanley

Editor, Understanding Regulation website
