Session co-organized by the Working Group on business and human rights and 7amleh.
Interpretation in English, French and Spanish
(French version attached) (Spanish version attached)

Brief description of the session:
From user-facing platforms to humanitarian tools and services, the tech sector, and in particular social media, has an ever-increasing impact on individuals and groups in the most vulnerable situations, including people of African descent and communities of colour. The failure to address the dissemination of hatred that incites discrimination on social media seriously undermines the promise of business and human rights: to provide a framework to prevent and remedy abuses by businesses, including tech companies, of internationally recognized human rights. Where online content moderation systems fail to effectively detect such content, incitement to violence can increase, hindering the enjoyment of a variety of human rights online and offline, including the right to life, the right to physical integrity, the right to health, and freedom from discrimination. As the Secretary-General of the United Nations indicated in the organization's Strategy and Plan of Action on Hate Speech, hate speech is a menace to democratic values, social stability and peace.
Regardless of investment and resources, social media companies face specific challenges in their content moderation efforts to mitigate the spread of such harmful content. Electronic communication services, social media platforms and search engines provide an ideal environment for the delivery of a range of narratives, including those that may constitute incitement to discrimination and violence. Individuals or groups systematically targeted by incitement to violence or discrimination, including racist attacks, are generally left without any effective means of defense, escape or self-protection, and often find themselves in situations of enhanced vulnerability.

There is increasing recognition of the deep impact of such systemic oppression on mental health. As various studies have shown, harassment alone, even in comparatively limited environments, can expose targeted individuals to extremely elevated and prolonged levels of anxiety, stress, social isolation and depression, and significantly increases the risk of suicide, which may amount to psychological torture. Broadly speaking, incitement to discrimination and/or violence, including on the basis of race, not only affects targeted groups of people but also harms society at large, exacerbating divisions and deepening polarization. These concerns become all the more pressing given the rising importance that young people attach to cyberspace and its potential to influence their choices and values. In this context, this session will explore the potential of collaboration by different stakeholders to ensure a smart mix of measures that leads to a human rights-respecting approach to online content moderation for social media platforms.
Key objectives of the session:
Analyze current content moderation initiatives addressing incitement to hatred and discrimination targeting people of African descent and ethnic and racial minorities on social media.
Identify good practices and challenges, including a smart mix of measures to protect and respect human rights in the context of social media content moderation.
Discuss concrete steps and actions that States, businesses, civil society organizations and other stakeholders can and should take to implement the UNGPs in social media content moderation, and to provide affected individuals and groups with effective access to remedy.
Key discussion questions:
What are the challenges for rights-respecting content moderation, especially in the Global South and in non-English speaking markets?
How can social media companies mitigate potential adverse human rights impacts through rights-respecting content moderation and human rights due diligence processes?