What do we mean by Human Rights Implications Checklist? For whom has this checklist been compiled?
The task leading to the Human Rights Implications Checklist was to develop a tool to help researchers in the MIRROR project reflect on the human rights implications of the (AI) tools being researched and developed within the project.
The first topic of reflection for researchers in the MIRROR project is that even if technology can be made to perform a certain function, it does not follow that the technology should automatically be used without considering the potentially profound human rights ramifications and real impacts on human lives. The aim is for the Human Rights Implications Checklist to serve as a reflection tool, in particular for the computer scientists and technology developers (in WP4, 5, 6 and 7 of the project). Against this background, the Human Rights Implications Checklist is meant to explain briefly (at a high level) the concerns that may arise from a human rights perspective and to ask the researchers to reflect on the potential use of the technology they are developing.
While preparing this checklist, we also identified other potential uses for a Human Rights Implications Checklist, especially in the period of preparation leading to the deployment of the MIRROR tools by a border authority. We will prepare a draft of this second human rights implications checklist to be used, tested and validated during the piloting of the software (as planned in WP10). We will add this second checklist to deliverable D.3.6, reporting on policy recommendations and supporting policy tools.
The creation of a human rights checklist also led us to reflect on the importance of prior knowledge, i.e. at the point of setting out to work on any aspect of an AI-based system, what does the person concerned, especially if not formally trained in the law, know about human rights? This leads us to suggest that all such teams should follow the 'Highway code principle'. Society does not allow people to obtain a driving licence without first sitting a theoretical exam heavily based on that country's highway code, which is in turn based on largely universal sets of rules and symbols agreed in international law and aimed at ensuring safety while driving, even for illiterate drivers. It is increasingly clear that society should likewise take all steps to ensure that technical and operational staff are not let loose on 'the information highway' and AI-based systems without at least being trained – and very preferably tested – in basic human rights law and practice.

It is only through such training, aimed at achieving human rights literacy, that these actors can be equipped with the right conceptual framework to appreciate the consequences of their work as well as the true meaning of the MIRROR Human Rights Implications Checklist. The checklist cannot be deployed to maximum effect unless the people using it have some grounding in basic human rights. A software engineer cannot be expected to properly protect privacy or freedom of expression in his or her work unless and until he or she has received formal instruction ensuring that those concepts are properly understood. So perhaps the first box in a Human Rights checklist for AI workers and other staff in border control contexts should be: 'Have the staff successfully undergone basic training in human rights law and practice?' Recent experience confirms that such essential training cannot be assumed and should be delivered as a matter of priority.
While this approach does not fully address the need for more systematic training, it has helped to raise awareness of human rights law and practice among MIRROR researchers and complements the reflections made within the developed Human Rights Implications Checklist.