Case Study: Should social media platforms do more to prohibit harmful speech?

Discussion Prompt

Social media platforms should do more to prohibit harmful speech, even beyond the limits on expression permitted by the Charter of Rights and Freedoms.

General background

Social media platforms are sometimes described as modern versions of the public square where people can freely share their opinions and engage in discussions. There are ongoing debates, however, over what kinds of speech should be allowed online. 

In Canada, the Charter of Rights and Freedoms guarantees freedom of expression: all Canadians have the right to express their thoughts, beliefs, and opinions in public without fear of censorship or punishment by the government.

While many forms of expression that could be considered hateful or hurtful are protected by the Charter, Canadians’ right to freedom of expression is not absolute, and there are limits on what can be expressed publicly. For example, the Criminal Code prohibits advocating genocide and publicly inciting hatred against an identifiable group, and courts have upheld these prohibitions as reasonable limits on expression under the Charter.

The question of freedom of expression becomes more complicated when applied to social media, because the platforms are owned by private companies that can set their own policies and guidelines for users to follow.

Currently, all major social media platforms restrict expression beyond the limits on expression permitted by the Charter of Rights and Freedoms. For example, TikTok’s policies prohibit content it defines as hate speech, content that promotes or incites violence, and harassment and bullying on its platform. When users sign up for a platform, they must agree to these “terms of service.” Violating these policies can result in users being banned or suspended, or having the reach of their posts limited for a period of time.

While some argue that even current online policies go too far in prohibiting speech, others argue that social media companies need to do even more to limit the spread of harmful content. 

Agree

Here are some of the reasons people might argue that social media companies should do more to prohibit harmful speech.

Ethics

While the ultimate goal of most social media companies is to turn a profit, they also have an ethical obligation to act responsibly and to account for the potential impact of the content that appears on their platforms. They should ensure that these platforms are not used to spread hate speech, misinformation, or other forms of harmful content.

Fostering safe communities

Many people spend as much (or more) time socializing online as they do offline and find value in building community on social media. Keeping social media platforms free of hurtful or harmful speech can help ensure that they remain a space where users can feel safe and comfortable. 

Preventing government regulation

Media companies often choose to impose restrictions on themselves to avoid public scrutiny and possible government intervention. Recently, social media companies have been blamed for contributing to problems such as social polarization, the spread of disinformation, and content that encourages self-harm. By imposing more limitations on content, companies try to demonstrate that they can deal with these issues on their own and don’t need the government to intervene or regulate them.

Reputation

Social media platforms need to attract large numbers of users, investors, and advertisers in order to make money, so they want to be seen as trustworthy places that people can use safely and securely. By limiting the content that appears on their platforms and demonstrating a commitment to protecting users, companies can build a positive reputation that can lead to greater business success.

Speed of spread

Because content on social media spreads so quickly and can reach so many people, it is necessary to place more restrictions on online speech than on offline speech. These restrictions can help slow the spread of content, such as medical disinformation, that can cause serious harm when people are exposed to it on a large scale.

Disagree

Here are some of the reasons people might argue that social media companies should not do more to prohibit harmful speech. 

Companies should not dictate what opinions are acceptable

In a democracy, individuals are allowed to express their views, even if those views are controversial. While the Charter of Rights and Freedoms allows “reasonable” limits to be placed on Canadians’ freedom of expression, these limits are meant to be determined by courts through legitimate legal processes, not by companies.

Difficulties defining harmful content 

Social media content policies can be vague, and it is not always clear whether a given piece of content violates a platform’s policies. People also disagree about what makes content “harmful.” As a result, decisions about which speech is allowed are highly subjective, and social media companies should not be the ones deciding what is harmful.

Difficulties moderating harmful content

Every day, billions of people post to their social media accounts. The sheer quantity of posts to review makes it nearly impossible to take down all harmful content. What’s more, attempts at moderation can backfire: some companies, like TikTok, use artificial intelligence for content moderation, which often results in content that is not harmful being censored.

Freedom of expression is a fundamental right

If social media platforms are going to function as a modern public square, then all Canadians have a fundamental right to participate. Canadians should be able to express themselves freely, without fear of being banned from a platform.

Moderation should be left to the users

Social media provides a space for people to express themselves and exchange their beliefs and points of view with a wide audience. If someone says something socially unacceptable, it is the job of other users, not the social media company, to explain why that opinion is wrong. Users should determine which ideas they think are best.
