In the wake of scandal after scandal – seemingly all of them involving Facebook – pretty much everyone agrees that we should regulate big tech. A key driver of this is our tendency to blame it for virtually every evil of modern society, whether polarisation, disinformation, radicalisation, harassment or misogyny, right up to terrorism itself.
All of those problems are, in reality, much more deeply rooted than that, but the arguments that social media and the internet have compounded or accelerated each of them are real. The problem is that it’s a lot easier to agree that “something must be done” than to actually work out what that “something” should be.
We have seen previous attempts to regulate the internet not just fail but backfire, often in ways that annoy the general public. The EU acted to give users control over how cookies are used to track our behaviour across the internet – and so target us more precisely with adverts.
The result was that every single webpage now has an annoying cookie pop-up that must be dismissed before you can read or access whatever content you were looking for. Bombarded with dozens of these every day, all but a few of the very hardiest internet users just mindlessly click them away – meaning the rules haven’t addressed the tracking problem, but they have made the internet much more annoying to use.
GDPR and other data protection regulations proved similarly naff – just about every email company operates exactly as it used to, data brokers still sell our information, and Facebook and Google still make record profits. But small companies and charities spent countless hours and no shortage of money making sure they complied with the new regulations, and many online sites and forms now have yet another box to tick or pop-up to dismiss to go alongside the cookie one.
There is, in short, a pretty solid track record of introducing new online laws that do nothing to tackle the actual issue they are supposed to address, while imposing costs or hassles that make the internet worse for all of us.
That’s the backdrop against which the government is introducing its Online Safety Bill, itself a rehash and expansion of an Online Harms white paper introduced under Theresa May, which provoked much concern from a wide range of rights groups at the time. The stakes for damage to the internet are higher with this bill, given its far greater ambitions than previous legislation: it tries to tackle everything from age verification for adult content, to retaining the communications records of terror suspects, to creating a “duty of care” that would make online companies responsible for their users and require them to moderate posts far more heavily.
These might sound like very welcome goals to most of us, but they are goals that come with the risks of severe side-effects. Age verification for adult content comes with the risk of creating some form of register of who has accessed which sites – which would be any online blackmailer’s dream. Advocates argue security and privacy protections can be built in, but these have been bypassed before – and even if they were entirely perfect, scammers would almost certainly still use the existence of the new age verification requirements in phishing attempts.
Creating a “duty of care” is similarly a great-sounding policy that quickly raises serious questions. Faced with potentially punitive fines for not regulating content appropriately, large companies generally respond by over-moderating – erring so much on the side of caution that they ban all sorts of borderline speech, most of it totally legal and much of it unproblematic.
In a world where the internet is where most of us are free to express ourselves – like it or not, politics and debate happen online now – that could easily result in curbs on the free expression of the British public (not US tech bosses) as a second-order effect of government rules. That could be a major imposition on the human rights of UK internet users, under the guise of regulating the platforms they use.
Even trying to get tech companies to do more to track possible suspect conversations in the interests of preventing terror or organised crime is more problematic than it first appears. Private messaging on the internet relies on end-to-end encryption – it’s what secures WhatsApp, Signal and other messaging apps – and closely related encryption protects online banking, shopping and more.
Putting in a back door so the government could access chats after the fact feels like it should be a no-brainer, but in practice, there is no way to create a back door only governments could use. Once you build in vulnerabilities, there is the potential for anyone to exploit them.
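The logic here can be sketched in a few lines of code. This is a deliberately toy illustration – a trivial XOR stream cipher standing in for real cryptography, with every name hypothetical and nothing drawn from the bill itself – but it shows why the mathematics cannot distinguish an “authorised” holder of a back-door key from anyone else who obtains it:

```python
# Toy illustration (NOT real cryptography): why an encryption back door
# cannot be limited to "governments only". A hash-chained XOR keystream
# stands in for a real cipher; all names here are hypothetical.
import hashlib


def keystream(key: bytes):
    """Derive an endless keystream from a key by hash-chaining (toy only)."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR each plaintext byte with the keystream.
    return bytes(b ^ k for b, k in zip(plaintext, keystream(key)))


decrypt = encrypt  # XOR is its own inverse, so the same function decrypts

# Alice and Bob share a session key; a mandated back door means a copy of
# that key (or a master key) is also escrowed with the authorities.
session_key = b"alice-and-bob-secret"
escrowed_copy = session_key  # the "back door": a second copy of the key

ciphertext = encrypt(session_key, b"meet at noon")

# The intended recipient can read the message...
assert decrypt(session_key, ciphertext) == b"meet at noon"
# ...but so can ANY party holding the escrowed copy: a government today, or
# a criminal who steals the escrow database tomorrow. The cipher has no way
# to tell an "authorised" key-holder from an attacker with the same key.
assert decrypt(escrowed_copy, ciphertext) == b"meet at noon"
```

The point the sketch makes is structural, not cryptographic: once a second copy of the key exists anywhere, whoever holds it can decrypt everything, and the scheme’s security now rests on that copy never leaking.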
The security analogy here would be the police insisting we all left our front doors unlocked so officers could more easily access our homes if needed. The privacy analogy would be imagining a microphone at each table in a pub or a café, recording the conversations held just in case they needed to be accessed by authorities later – and saying this was only a breach of privacy for those people whose communications were actually accessed.
All of this is before the government’s nonsensical last-minute inclusions into the bill, such as a mooted proposal to make instigating a Twitter “pile-on” – where a high-profile user encourages their followers to criticise or abuse someone else – a criminal offence. Holding one individual criminally responsible for the actions of others would be a novel offence in itself, even before you consider the difficulty of proving a pile-on was intentional, or of formally defining what constitutes one.
None of this is to say that the regulatory environment we have at the moment is the right one, or that we have the perfect balance between our rights, our security, our privacy and our safety – we almost certainly don’t.
But it is intended to highlight the serious trade-offs at play, and so the importance of scrutinising and considering any new regulation of online behaviour very carefully indeed. That seems sadly unlikely to happen.
Astonishingly, given his former role as a human rights lawyer, Keir Starmer appeared to commit Labour to backing the Online Safety Bill provided the government worked to rush it through quickly – sacrificing an important issue for a headline that didn’t even lead the news agenda for a single day.
Starmer should reconsider this position – as should the government’s backbenchers, many of whom have defined themselves as liberals or libertarians through the era of Covid restrictions – not only for the reasons above but also because of the ambitious ministerial power-grab contained in this legislation.
Given the bill will grant new and sweeping powers to set the rules on online speech and how it’s managed, it needs strong protections against political interference in setting those rules – or it could easily be used by this government (or a future authoritarian government) to tweak rules in line with its own agenda, whether battling the “culture wars” or rejecting wokeism.
The bill does put Ofcom in charge of those rules – a huge expansion of its already broad role – but gives ministers the power to set Ofcom’s “strategic priorities”, decide what counts as “priority content” for moderation and regulation, direct Ofcom to modify its codes of practice, and guide Ofcom in how it exercises its powers.
That is not something that should be passed by parliament in a rush or on the nod. The reality of regulating tech and managing the numerous real world problems tied into it shouldn’t be wrapped up into one magic bill.
The government should try to tackle one at a time, with properly thought-through measures – or else we will just keep making the internet worse, and then repeating this cycle.