Following the 2021 TikTok Blackout Challenge, in which participants restrict their breathing for a set period of time, at least five children aged 8 to 12 have reportedly died. The parents of three of them sued TikTok after their daughters died attempting the challenge. In each complaint, TikTok is accused of promoting dangerous behaviour by using its algorithm to encourage children to take part.
Likewise, in the UK and France, complaints have been filed by parents following the suicides of their children. In the case of Marie, a French girl who took her own life, her parents filed a complaint against TikTok for incitement to suicide ("provocation au suicide"), failure to assist a person in danger ("non-assistance à personne en péril") and propaganda or advertising of means of taking one's own life ("propagande ou publicité des moyens de se donner la mort"). In the UK, the ruling following Molly Rose Russell's suicide declared that "the internet affected [Molly's] mental health in a negative way and contributed to her death in a more than minimal way".
Facebook, Instagram, TikTok, YouTube, Snapchat, LinkedIn, Pinterest… Social media has become a primary instrument of communication, information, and expression. It allows everyone to share their thoughts and ideas, and to seek and share information. Free and informed access to social media is therefore one of the main tools ensuring the right to freedom of expression. This fundamental human right is proclaimed in international and regional treaties, as well as in national laws, and has been described as the cornerstone of democracy. The right to freedom of expression is closely related to the freedom of opinion.
The right to freedom of opinion is a human right recognized and guaranteed, inter alia, under the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. It encompasses the right to hold an opinion without interference, not to have one's opinion manipulated, and not to be punished for one's opinion. The influence of social media on how we inform ourselves and forge our opinions raises the question of whether such influence can be manipulative, thus infringing on the human right to freedom of opinion.
Earlier in the year, 42 attorneys general in the United States filed a lawsuit against Meta, the parent company of Facebook and Instagram, claiming that the company built addictive features into its social media platforms that harm the mental and physical health of young people. The lawsuits argue that the company manipulates and exploits young users for profit.
Social media platforms such as Instagram and TikTok use AI algorithms to recommend tailored content to their users. To do so, the algorithms analyse each user's activity on the platform and display content curated to that user's interests. While this can be seen as a benefit, since users consume content that matches their interests, it can in fact significantly hinder their freedom of opinion. A term has emerged to describe this phenomenon: the filter bubble. According to the Cambridge Dictionary, a filter bubble is "a situation in which someone only hears or sees news and information that supports what they already believe and like […]". Because the content displayed is tailored to the individual, the individual is repeatedly shown views they already hold and is less able to form new opinions. Irene Khan, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, highlights in her 2021 report that "the systematic collection of data about users' activities online and targeted advertising may violate user's right to freedom of opinion under Art 19(1)". She adds that "companies promote a system that significantly undermines people's agency and choice in relation to their information diet".
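The mechanism described above can be made concrete with a deliberately simplified sketch. The following toy recommender is purely illustrative and hypothetical (no real platform's algorithm is this simple): it ranks candidate posts solely by their similarity to topics the user has already engaged with, so content outside the user's existing interests is systematically pushed out of the feed, producing the filter-bubble effect.

```python
# Illustrative sketch only: a toy engagement-based recommender.
# All names and data here are hypothetical, not any platform's real system.
from collections import Counter

def build_profile(engagement_history):
    """Count how often the user engaged with each topic."""
    return Counter(topic for item in engagement_history for topic in item["topics"])

def score(item, profile):
    """An item's score is the user's accumulated affinity for its topics."""
    return sum(profile[topic] for topic in item["topics"])

def recommend(candidates, engagement_history, k=3):
    """Return the k candidates most similar to the user's past engagement."""
    profile = build_profile(engagement_history)
    return sorted(candidates, key=lambda item: score(item, profile), reverse=True)[:k]

history = [
    {"id": 1, "topics": ["politics", "opinion"]},
    {"id": 2, "topics": ["politics"]},
]
candidates = [
    {"id": 3, "topics": ["politics", "opinion"]},  # reinforces existing views
    {"id": 4, "topics": ["science"]},              # novel viewpoint, scores zero
    {"id": 5, "topics": ["politics"]},
]

feed = recommend(candidates, history, k=2)
# The feed favours items matching prior engagement; item 4 never surfaces.
```

Because the ranking rewards only past engagement, each cycle of viewing and recommending narrows the range of viewpoints the user encounters, which is precisely the dynamic the filter-bubble critique targets.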
The increasing number of complaints lodged against social networks shows a growing awareness of the dangers they can present. In Kyrgyzstan, banning TikTok has been considered in order to protect young people's mental health. This raises a difficult question: how should both the protection of individuals and the freedom of expression be guaranteed under different international, regional and national regulations?
To combat the dangers of algorithms and AI, Irene Khan suggests developing "sufficient publicly available information to enable users, researchers and activists to understand the way in which algorithms promote certain kinds of content". Particular protection should be afforded to children, who are especially vulnerable to manipulation, easily influenced, and still at the age of forming and shaping their opinions.