The 2021 UN report “The Right to Privacy in the Digital Age” expresses the concerns of the United Nations High Commissioner for Human Rights about the negative impact that can result from the “improper” and “defective” use of data by artificial intelligence (AI) systems, including profiling, automated decision-making, and machine-learning technologies.

In particular, the application of AI tools has given rise to concern in four key areas:

  • law enforcement,
  • national security,
  • criminal justice,
  • border management.

The report also denounces “worrying developments, including a sprawling ecosystem of largely non-transparent personal data collection and exchanges that underlies parts of the AI systems that are widely used”.

“The risks of artificial intelligence to privacy are one of the most pressing human rights issues we face.” This was the appeal launched by Michelle Bachelet, the UN High Commissioner for Human Rights, during the presentation of the latest report on the right to privacy in the digital age at the 48th session of the UN Human Rights Council, held from 13 September to 1 October 2021.

The right to privacy is a fundamental right recognized by the most important international instruments: the Universal Declaration of Human Rights, in Article 12; the International Covenant on Civil and Political Rights, in Article 17; the Convention on the Rights of the Child, in Article 16; the International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families, in Article 14; the Convention on the Rights of Persons with Disabilities, in Article 22; the African Charter on the Rights and Welfare of the Child, in Article 10; and the American and European Conventions on Human Rights, in Articles 11 and 8 respectively.

Furthermore, the right to privacy is “an expression of human dignity and is linked to the protection of the autonomy of the individual and his personal identity” and, as such, must be recognized for all without exception.

Among the possible consequences for human rights, two considerations stand out. The first concerns the collection and dissemination of data; the second, the use of these systems in sectors deemed strategic.

In the first case, the vast datasets available to AI often include personal data, so they risk facilitating intrusions into people’s privacy, enabling the “opaque” or unclear dissemination of that data, and exposing it to illegal uses. In the second, the same risks are amplified when these systems are used in the key sectors already mentioned: national security, law enforcement, criminal justice, and management of national borders.

Bachelet’s final recommendations are addressed to States and businesses and focus on the design and implementation of safeguards to prevent and minimize harmful outcomes and facilitate the full enjoyment of the benefits that artificial intelligence can provide.

In particular, States should:

(a) Fully recognize the need to protect and reinforce all human rights in the development, use and governance of AI as a central objective, and ensure equal respect for and enforcement of all human rights online and offline;
(b) Ensure that the use of AI is in compliance with all human rights and that any interference with the right to privacy and other human rights through the use of AI is provided for by law, pursues a legitimate aim, complies with the principles of necessity and proportionality and does not impair the essence of the rights in question;
(c) Expressly ban AI applications that cannot be operated in compliance with international human rights law and impose moratoriums on the sale and use of AI systems that carry a high risk for the enjoyment of human rights, unless and until adequate safeguards to protect human rights are in place;
(d) Impose a moratorium on the use of remote biometric recognition technologies in public spaces, at least until the authorities responsible can demonstrate compliance with privacy and data protection standards and the absence of significant accuracy issues and discriminatory impacts, and until all the recommendations set out in A/HRC/44/24, paragraph 53 (j) (i–v), are implemented;
(e) Adopt and effectively enforce, through independent, impartial authorities, data privacy legislation for the public and private sectors as an essential prerequisite for the protection of the right to privacy in the context of AI;
(f) Adopt legislative and regulatory frameworks that adequately prevent and mitigate the multifaceted adverse human rights impacts linked to the use of AI by the public and private sectors;
(g) Ensure that victims of human rights violations and abuses linked to the use of AI systems have access to effective remedies;
(h) Require adequate explainability of all AI-supported decisions that can significantly affect human rights, particularly in the public sector;
(i) Enhance efforts to combat discrimination linked to the use of AI systems by States and business enterprises, including by conducting, requiring and supporting systematic assessments and monitoring of the outputs of AI systems and the impacts of their deployment;
(j) Ensure that public-private partnerships in the provision and use of AI technologies are transparent and subject to independent human rights oversight, and do not result in abdication of government accountability for human rights.

States and business enterprises should:

(a) Systematically conduct human rights due diligence throughout the life cycle of the AI systems they design, develop, deploy, sell, obtain or operate. A key element of their human rights due diligence should be regular, comprehensive human rights impact assessments;
(b) Dramatically increase the transparency of their use of AI, including by adequately informing the public and affected individuals and enabling independent and external auditing of automated systems. The more likely and serious the potential or actual human rights impacts linked to the use of AI are, the more transparency is needed;
(c) Ensure participation of all relevant stakeholders in decisions on the development, deployment and use of AI, in particular affected individuals and groups;
(d) Advance the explainability of AI-based decisions, including by funding and conducting research towards that goal.

Business enterprises should:

(a) Make all efforts to meet their responsibility to respect all human rights, including through the full operationalization of the Guiding Principles on Business and Human Rights;
(b) Enhance their efforts to combat discrimination linked to their development, sale or operation of AI systems, including by conducting systematic assessments and monitoring of the outputs of AI systems and of the impacts of their deployment;
(c) Take decisive steps in order to ensure the diversity of the workforce responsible for the development of AI;
(d) Provide for or cooperate in remediation through legitimate processes where they have caused or contributed to adverse human rights impacts, including through effective operational-level grievance mechanisms.
