Co-organised by the UN Working Group on Business and Human Rights, the UN Human Rights B-Tech Project, Ranking Digital Rights and the UN Global Compact.
Brief description of the session: This session will explore how the human rights of different stakeholder groups may be adversely impacted by artificial intelligence (“AI”) and discuss how a smart mix of measures in the implementation of the UN Guiding Principles on Business and Human Rights (“UNGPs”) can address such impacts.
To inform the discussion, this session will present the results of thematic and regional work on children and digital technologies, gender perspectives, engagement with stakeholders in diverse geographies, and dialogue with policymakers seeking to govern AI.
Key topics to be discussed include: (1) how the development, deployment and use of AI systems may pose human rights risks to specific stakeholder groups across diverse geographies; (2) how processes by business enterprises and other actors to address these risks could be informed by the UNGPs; and (3) how the UNGPs and stakeholder views can guide States towards adopting an effective “smart mix of measures” to require technology companies to respect human rights when developing and using new technologies.
The session will feature panelists representing the UN Human Rights Office, UN Global Compact companies developing and deploying AI, civil society, and other stakeholders.
AI continues to change our information ecosystem and our everyday ways of working. There is limited information about the extent to which human rights due diligence in technology-sector value chains has taken into account the specific needs of different stakeholder groups across diverse geographies. There has also been little opportunity for learning across the tech industry, and among companies designing, developing and deploying AI, about effective approaches to engaging meaningfully with diverse stakeholders across different socio-economic contexts in order to prevent and mitigate human rights risks linked to advances in AI.
There is thus an urgent need to explore how the voices and needs of stakeholders can be integrated into business operations in relation to AI. Identifying appropriate responses to this question and building alignment across industry, civil society and standard setters about expectations should draw on international human rights standards. In particular, the expectations set out in the UNGPs can provide authoritative and widely accepted guidance. Using these global standards as the initial basis for unpacking the scope and nature of corporate responsibilities can also provide a common foundation for constructive and robust dialogue.
Key objectives of the session:
- To present insights into the specific needs of stakeholder groups that businesses must consider in order to respect human rights in the AI space, including community-led approaches.
- To summarise current promising practices and gaps in AI risk mitigation in diverse geographies.
- To discuss the implications of the UNGPs for AI regulation.
- To propose next steps for policy-makers, businesses and civil society to ensure that AI roll-out globally is conducted in a rights-respecting manner.
Background reading:
- Headlines and Recommendations from the GenAI B-Tech Foundational paper
- Advancing Responsible Development and Deployment of Generative AI, a UN B-Tech foundational paper
- Taxonomy of Generative AI Human Rights Harms, a B-Tech Gen AI Project supplement
- Overview of Human Rights and Responsible AI Company Practice, a B-Tech Gen AI Project supplement
- Harvard Carr Center Discussion Paper “Fostering Business Respect for Human Rights in AI Governance and Beyond: A Compass for Policymakers to Align Tech Regulation with the UNGPs”
- B-Tech Stakeholder engagement paper
- UN Global Compact: Report on Artificial Intelligence and Human Rights
- UN Global Compact & Accenture Report: Gen AI for the Global Goals: The Private Sector’s Guide to Accelerating Sustainable Development with Responsible Technology