The UK government's newly released Online Harms whitepaper sets out a vision for a more tightly regulated internet. It calls for the establishment of an independent regulator tasked with forming a 'duty of care' framework for online content sharing companies (social media, search engines, video or blogging platforms) and empowered to enforce this framework with the threat of hefty fines, the penalisation of company executives, and even by instructing ISPs to block websites deemed unacceptably 'harmful'.
The scope of the body's target industries is matched by the huge diversity of 'online harms' it would police. These span from the pernicious (cyberbullying, the spread of fake news or disinformation, and trolling) to the illegal: revenge pornography, hate crimes, harassment, the sale of illegal goods, the spreading of terrorist content, and child sexual abuse material.
The main targets of this increased regulation are likely to be the all-encompassing platforms that have come to dominate every sector of public and private discourse. Indeed, the report explicitly states that platforms posing the greatest threat, in terms of either scale or severity, will be the first targets.
"The era of self-regulation for online companies is over," said Digital, Culture, Media and Sport Secretary Jeremy Wright, while laying out the proposals. "Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough."
But how will this legislation be applied in practice? The whitepaper has been criticised for the vagueness of its proposals by bodies such as TechUK, which said: "With many key questions still open for consultation, there is still a long way to go to achieve the government's ambition of creating a world-leading framework to combat online harms." This is partly because more specific proposals will be developed by the independent body once it has been established, but it means that, at present, it is hard to quantify the legislation's likely effects.
Examples of proposals provided by the paper include fact-checking services on platforms, curbing the reach of content that has been marked as false, promoting authoritative news sources, and improving the transparency of political advertising, but the vast scope of the materials destined to fall under the new body's remit will provide a challenge.
The approach to clamping down on cyberbullying, for example, or on communities that share harmful content of the kind implicated in the suicide of 14-year-old Molly Russell, will necessarily differ vastly from the approach to tackling disinformation or the sharing of child abuse material by paedophiles. These activities take place in different corners of the internet and propagate through very different mechanisms.
There is the chance that proposals that currently appear radical will be diluted by the time they are formalised into a coherent framework. The paper states that there must be a balance between the UK's desire to be "the safest place in the world to go online" and the drive to be the "best place to build a digital company".
Is regulation the future?
Some have warned that these calls could stifle innovation in the tech sector, but targeting based on scale may help to sidestep this issue.
These areas are already dominated by a handful of companies at most: Google in search; Facebook, the Facebook-owned Instagram, Snapchat and Twitter across most of the popular social media platforms; and Google's YouTube with the lion's share of the video-sharing market.
It's unlikely that startups will bear the brunt of the new regulatory force, although those in the content-sharing space might begin to consider how to bake preventative mechanisms into their product from the outset.
These recommendations, if fully realised in tangible policy initiatives, would situate the UK as a pioneer of internet safety. This move has followed long-bubbling indignation over the vast power of internet giants that control the most consequential platforms online. In recent months, there have been calls from US senator Elizabeth Warren to break up big tech; a UK whitepaper calling for the increased regulation of tech giants; European legislation on content restriction; proposals for limiting porn access online; and now this.
Signs point towards a watershed moment where regulation is realised on a grand scale. The question is, will there be casualties in this fight for a 'safer' internet?
Though instinct suggests that tech giants will recoil from the prospect of increased regulatory oversight, Facebook has expressed receptiveness. Last week, Mark Zuckerberg published an op-ed in the Washington Post arguing that "the internet needs new rules". In it, he details four areas in need of greater legislation, one of which is "harmful content". He called for more rigid guidelines from government bodies on what constitutes harmful content, the imposition of minimum acceptable levels, and for internet companies to publish reports on how effectively they are dealing with such content. (Facebook and Twitter already publish such reports.) Many of these recommendations are also echoed in the whitepaper.
Zuckerberg's increasing desperation to portray Facebook as a benevolent giant means that he may attempt to make a show of cooperating with any of the regulatory calls from the new body. In France, civil servants have been permitted to embed within Facebook for six months to observe how the company handles and responds to harmful content hosted on the site. However, if the legislation is to be truly radical, it must push tech giants out of their comfort zone.
Heads of UK policy for Twitter and Facebook both made positive-sounding statements, although both were careful to stress, respectively, the importance of "working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet" and "supporting innovation, the digital economy and freedom of speech".
Does this indicate a general move towards a more highly regulated internet in the coming decades? And what does this mean for the average internet user? There has been pushback to the whitepaper from libertarian advocates of an open internet including the neoliberal think tank, the Adam Smith Institute. Matthew Lesh, its head of research, said: "The government should be ashamed of themselves for leading the western world in internet censorship. The proposals are a historic attack on freedom of speech and the free press."
Freedom of speech campaigners Article 19 also oppose any 'duty of care' onus placed on internet platforms. They say the government "must not create an environment that encourages the censorship of legitimate expression". While it is hard to argue that child abuse and terrorist material should fall under 'freedom of speech' protections, there are cases where the lines are blurrier. These concerns may be fuelled by the fact that Facebook, in attempting to police its own platform, has removed legitimate but fringe political publications such as Venezuela Analysis and Telesur English without providing any specific reason. Incidents like these demonstrate the need for a more refined approach to filtering out harmful content.