Canada is set to become the latest country to join a global trend in online safety regulation. As its approach proceeds through public and legislative debate, it is important to understand both how it has been shaped by efforts in other countries and how it could influence future efforts around the world.
On 26 February 2024, the Canadian government tabled the landmark Bill C-63, which seeks to enact the Online Harms Act. If passed, it will set in place a legislative scheme to hold a wide range of services accountable for reducing the risk of harmful content online. It also proposes a comprehensive regulatory framework, including the establishment of a Digital Safety Commission to oversee enforcement, an Ombudsperson to support victims of online harms, and a Digital Safety Office to support their operations. Beyond that, the Bill seeks amendments to the Criminal Code and Human Rights Act to introduce new “hate crime” offenses. In this post, I’ve focused on the provisions in Part 1 of the Bill, the Online Harms Act.
The legislative approach of the Act, currently pending its second reading in the House of Commons, borrows from a collective toolbox for online safety that Canada’s counterparts in the European Union, the United Kingdom, and Australia have already largely stocked. The Global Network Initiative (GNI) has engaged in several of these law-making processes in the EU, the UK, and Australia, and has tracked how they have been reproduced in countries such as Vietnam and Sri Lanka. Given this broader trend and the extent to which related regulatory efforts build on one another, the evolving contours of the Canadian approach are likely to have serious implications for human rights not just in Canada, but around the world.
The Problematic Potential of Strong Authorities with Ample Prosecutorial Discretion
The proposed Online Harms Act applies to social media services accessible in Canada, including services allowing for live streaming or the sharing of user-generated pornographic content, but notably excludes private messaging services from its scope. Compared to the online safety acts in the UK and Australia, the definition of regulated services under Bill C-63 is rather vague, with much of its interpretation – such as the minimum number of users that would bring a service within scope – left to implementing regulations. Additionally, under Article 2.3, the Governor in Council can “make regulations designating a particular social media service” if they are satisfied there is a significant risk of it hosting harmful content. This reliance on secondary legislation to expand on and implement the Online Harms Act, primarily through a Digital Safety Commission, is also noticeable in subsequent parts of the draft law.
The establishment of the Digital Safety Commission, with mandated powers to investigate, hear complaints, issue guidelines, and enforce the Online Harms Act, is one of the major features of the draft law. Some of its functions are modeled after the Australian eSafety Commissioner, including its discretionary powers and the power to make regulations. GNI was involved in the initial drafting stages of Australia’s Online Safety Bill, which formalized the role of the eSafety Commissioner, and argued for oversight and reporting requirements to enhance the Commissioner’s accountability. The same arguments apply to the Canadian approach, which attempts to further strengthen the Digital Safety Commission with the power to hold hearings (Article 89) and to decide procedural and evidentiary questions (Article 86.d), without clear safeguards on those functions.
This regulatory model for the Commission – combining broad authorities with a lack of oversight – is a recipe for potential abuse and troubling in the global context, especially for privacy and freedom of expression in authoritarian countries. Sri Lanka is a recent example of a country that has adopted an Online Safety Bill that deals a severe blow to human rights. GNI has shared its concerns about the Online Safety Commission established by that Bill, which is assigned broad and unchecked powers to regulate a wide range of content categories under vaguely defined offenses. Canada’s approach will inevitably contribute to a global precedent that can either protect or undermine human rights, and it is therefore important that the government frame the Commission’s authorities as carefully as possible.
Test of the Necessity and Proportionality Principles
Another major question is whether the Online Harms Act meets the principles of necessity and proportionality, two important human rights principles established under international law. Canada has been a steadfast advocate for human rights within the international system, including through its participation in the Freedom Online Coalition and the Christchurch Call. Given this role, it is important that the Canadian government clarify how the Act aligns with its interpretations of, and commitments to, privacy and freedom of expression. The following provisions raise particular concerns:
Canada’s Online Harms Act is an effort to create safer and more inclusive online spaces for people, but its long-term effects require careful thought. Technology changes quickly, and new laws might have unforeseen consequences for human rights. Therefore, to ensure the draft law’s effectiveness and protection of human rights, the Canadian government should prioritize expert and multistakeholder consultation prior to its adoption. In 2020, GNI published a policy brief on Content Regulation and Human Rights analyzing more than 20 governmental initiatives that claim to address digital harms. The brief can offer Canadian stakeholders practical guidance on how to formulate and implement content regulations that are effective and that enhance and protect the rights to freedom of expression and privacy. The government and Parliament should seek further opportunities to work with a wide range of experts in the field, including those from academia, civil society, and industry, as they work to ensure that the Online Harms Act provides a best-in-class global example of a rights-respecting, rights-protecting, and future-proof approach to content regulation.