Grappling with Online Safety Legislation: How to Hold the Platforms Accountable

When it comes to online safety—or its flip side, online harms—many countries are grappling with the problem. What is the role of government in establishing guidelines and regulations to protect citizens, particularly vulnerable segments of the population, from a range of harms perpetrated by anti-social and even criminal elements via the internet? What is the role of “internet intermediaries”, the internet distribution and social media platforms that the perpetrators use to attack their victims? Are the platforms simply innocent third parties whose services have been hijacked, like phone lines being used by fraudsters to scam victims, or do they play a more active role? Should they be expected to be as much a part of the solution as they are part of the problem? And if they are expected to play a role in controlling harmful content, what is that role and how should they be encouraged (or compelled) to go about it? And, for that matter, what exactly is “harmful content”?

These are among the many questions being examined in various jurisdictions: in the UK through its draft online safety legislation; in the US, where the focus has been on amending the notorious Section 230, which the courts have interpreted to shield the platforms from responsibility for harmful content they distribute; and in Canada. Australia has already enacted legislation. All are approaching a similar problem in different ways, although there are some common themes. Let’s look at the Australian and Canadian approaches and see what lessons they may hold.

The Australian legislation, known as the Online Safety Act, was passed in mid-2021 and came into effect in January of this year. Australia was the first country to establish an eSafety Commissioner, in 2015. Initially the remit of this office was limited to online safety for children, but in 2017 its mandate was expanded to cover all Australian citizens. With the implementation of the Online Safety Act, the Commissioner’s role expands further. The Commissioner has an education, research, investigatory and limited quasi-enforcement role (i.e., it can issue blocking requests and blocking orders). The new Act updates and consolidates earlier legislation dealing with cyberbullying of children and, in the words of the Australian government:

“…retains and replicates certain provisions in the Enhancing Online Safety Act 2015, including the non-consensual sharing of intimate images scheme; specifies basic online safety expectations; establishes an online content scheme for the removal of certain material; creates a complaints-based removal notice scheme for cyber-abuse being perpetrated against an Australian adult; broadens the cyber-bullying scheme to capture harms occurring on services other than social media; reduces the timeframe for service providers to respond to a removal notice from the eSafety Commissioner; brings providers of app distribution services and internet search engine services into the remit of the new online content scheme; and establishes a power for the eSafety Commissioner to request or require internet service providers to disable access to material depicting, promoting, inciting or instructing in abhorrent violent conduct for time-limited periods in crisis situations.”

That is a lot of power, and it would seem similar to what the Canadian government is trying to achieve with its “online harms legislation”. However, even though the proposed online harms definitions in Canada are much more tightly constrained than under the Australian legislation or the draft Online Safety Bill being examined in the UK, there has been a lot of push-back from various groups. In 2021 the Department of Canadian Heritage issued a discussion paper laying out a proposed approach to address harmful online content, inviting public comment. And feedback they got.

As in Australia, the proposed legislation would establish an office to oversee it, headed by a “Digital Safety Commissioner”. In Canada, the Commissioner’s responsibilities would focus primarily on administration and enforcement: the power to receive complaints, conduct compliance inspections, issue public reports and compliance orders, recommend administrative penalties of up to $10 million, refer offences for non-compliance to prosecutors (with fines of up to $25 million), and, in exceptional circumstances, apply to the Federal Court for an order requiring telecommunications service providers to block or filter access in cases involving child sexual exploitation and/or terrorist content. This targeted use of site-blocking, which I wrote about earlier (Site-blocking for “Online Harms” is Coming to Canada), would be a last resort, applied to offshore sites beyond the reach of Canadian courts. The Commissioner’s office would be backstopped by a Digital Recourse Council (the title is self-explanatory), and both the Commissioner and the Recourse Council would have the benefit of an industry Advisory Board. The legislation would be restricted to the following five categories of “harms”, all of which fall under the Criminal Code: (1) terrorist content; (2) content that incites violence; (3) hate speech; (4) non-consensual sharing of intimate images; and (5) child sexual exploitation content.

Not covered is a range of harmful but possibly legal activities and content, such as cyberbullying, defamation, online harassment, disinformation, false advertising and so on: the so-called “awful but lawful” content. Notably, the proposed UK legislation would cover such content, at least insofar as dominant platforms are concerned.

Despite its relatively narrow focus, and the fact that it would establish an independent regulator subject to a Recourse Council, the proposed Canadian legislation has been heavily criticized by civil liberties and “internet freedom” groups. Some have objected to the requirement for platforms to inform law enforcement of illegal activities, claiming this would amount to unauthorized surveillance, although the reporting requirement would be limited to cases involving “serious imminent harm” or threats to national security. I find it hard to imagine that any responsible business would deliberately turn a blind eye to such information, and the legislation would give them needed legal cover to act. Indeed, it would require them to do so.

Other critics complained that the requirement for platforms to monitor for harmful content on their services and take it down within 24 hours of being flagged would result in censorship, especially since automated screening mechanisms would likely be used, possibly resulting in “false positives” and overzealous screening. As for the proposed remedy of site-blocking for offshore sites distributing material that is sexually exploitive of children or that promotes terrorism, critics complained this would violate net neutrality. This is the usual canard trotted out at any suggestion that some reasonable controls be placed on internet content, even when that content is illegal and the remedy is subject to court review. But net neutrality has nothing to do with permitting illegal content to remain online in defiance of court orders.

For some critics, any regulation of the internet is too much. Nonetheless, as with any government regulatory intervention, it is important to strike the right balance between necessary oversight and the lightest regulatory touch needed to preserve individual freedoms. The consultation process allowed for that input, and the results were published in February in a government document titled “What We Heard”. Somewhat unusually, submissions were not made public, apparently out of concern that some groups would not want their experience with harmful content shared publicly, although other groups released their submissions on their own. Instead, the government summarized input from the 422 unique submissions using opaque language such as “some stakeholders”, “multiple respondents”, “a few respondents”, “certain respondents”, “a select few” and “a majority of respondents”. We are left to guess at who said what. It is hard to summarize the input on the multiple elements of the proposal, but one phrase perhaps best encapsulates the feedback received: “Regarding the substance of the proposal, although multiple individuals and organizations welcomed the Government’s initiative, only a small number of submissions from those stakeholders were supportive, or mostly supportive, of the framework as a whole.”

This suggests a return to the drawing board and perhaps more consultations, but it is clear that the government intends to proceed with measures to help ensure online safety. For one thing, it is in the mandate letter of the responsible minister. This is buttressed by the passage in “What We Heard” noting that:

“Almost all respondents commented that Government regulation has a role to play in addressing online harms and in ensuring the safety of Canadians online.

A digital safety regulator will be part of the solution.

The introduction of the regulatory bodies was broadly supported, as many respondents thought the new regulators seemed fit for purpose.”

Reading through the comments on the various elements of the proposal, it is evident that they were all over the map, no doubt reflecting the backgrounds of the various intervenors: individuals (350), civil society (39), industry (19) and academics (13). Then there were roughly 9,000 “click and submit” interventions organized by cyberlibertarian lobby groups like Open Media. Many of the submissions contradicted each other. And of course, even counting in the Open Media claque, this is hardly a representative sample of what 36 million Canadians think, and the government knows this. While I can’t prove it, I am willing to bet that most Canadians are fed up with toxic, dangerous and predatory content on the internet and want the platforms to take some responsibility for what they allow to be distributed. They also think there is a role for government to play in ensuring that the platforms, which until now have managed to duck accountability for content that they permit, sometimes promote and often indirectly profit from, are held to account. (The UK is proposing an interesting accountability feature: a “designated executive” who would be personally liable for a new criminal offence of failing to deal with “repeated and systemic failings that result in a significant risk of serious harm to users”.)

Canadians are not the only ones concerned. Online safety is a hot-button issue in many democracies. While Australia is leading the way, the UK and Canada are moving ahead at their own pace, and the US is (slowly) addressing Section 230 and kids’ online safety. The children’s safety issue even rated a mention in the U.S. State of the Union address delivered by President Biden on March 1. It is important to get the balance right, but equally important not to let the platforms off the hook. With great power (and great profits) comes great responsibility.

© Hugh Stephens 2022. All Rights Reserved.
