Taking the design and development of data analysis algorithms as our point of departure, we need to keep in mind that most of the MIRROR tools are intended to enhance understanding by identifying correlations, patterns or causal relationships in the data being collected, processed and used to assist border authorities. It is therefore this added meaning, and its reliability, that we need to keep in mind when reviewing the human rights implications. Four types of tool-building are evident in the MIRROR platform:
- Tools identifying behaviour through text analysis of social media and other media;
- Tools identifying behaviour through image analysis;
- Tools attempting to predict behaviour, perceptions or choices based on the results of the text and image analysis;
- Tools integrating the results of all of the above.
An assessment of human rights impacts should take into account both the data sets used and the analysis applied to the data (‘the product’). To take an example concerning data sets, bias may be hidden in the data set and thus not found in the algorithm itself when analysing it. When considering the applied analysis, we need to bear in mind that there are varying levels of discretion in the choice of relevant determinants, for instance, what training data to use or how to respond to false positives, and that the power of the operator of the algorithm may lie in his or her knowledge of the structure of the data set rather than in insight into the exact workings of the algorithm.
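To make the point about hidden bias concrete, the following minimal sketch (illustrative only; the group labels, field names and threshold are hypothetical and not taken from MIRROR) shows how a scoring rule that is itself ‘neutral’ can reproduce a skew that sits entirely in the training data, and how the choice of decision threshold, a discretionary design choice, determines who bears the false positives.

```python
from collections import defaultdict

# Hypothetical "historical" records: the label reflects past flagging practice,
# not ground truth, so group B is over-represented among positives.
training_records = (
    [{"group": "A", "flagged": False}] * 90
    + [{"group": "A", "flagged": True}] * 10
    + [{"group": "B", "flagged": False}] * 60
    + [{"group": "B", "flagged": True}] * 40
)

# A seemingly neutral rule: score each group by its historical flag rate.
counts = defaultdict(lambda: [0, 0])          # group -> [flagged, total]
for record in training_records:
    counts[record["group"]][0] += record["flagged"]
    counts[record["group"]][1] += 1
risk_score = {g: flagged / total for g, (flagged, total) in counts.items()}

THRESHOLD = 0.25                              # discretionary design choice

for group, score in risk_score.items():
    decision = "refer for extra screening" if score >= THRESHOLD else "no action"
    print(f"group {group}: learned score {score:.2f} -> {decision}")
# The rule never mentions why the groups differ; the skew it reproduces
# sits entirely in the training data, not in the algorithm's code.
```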
This is especially important to consider for toolset C (the predictive tools). The predictability of an algorithm's outcome to its operator also matters when considering accountability and the design of adequate governance structures for its use. The reliability of all of these tools must likewise be kept in mind. Can our border control agency rely solely on the tools presented in MIRROR, or is human decision-making equally important? Are the results of the tools presented as aids to decision-making or as the end result of decision-making? Each of the choices arising from these questions may have important human rights implications. As noted extensively in the literature, the quality of decision-making by humans and by algorithms must be judged in fundamentally and categorically different ways: they make different kinds of mistakes, with different outcomes and therefore different consequences.
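The difference between these two framings can be made visible at the level of system design. The sketch below is illustrative only, with hypothetical function names, fields and threshold: the same score is either handed to a human officer as one input among others, or acted upon directly, and the accountability questions raised above attach very differently to the two designs.

```python
def as_decision_aid(score: float, evidence: list[str]) -> dict:
    """The tool's output is presented to a human officer, who decides."""
    return {
        "suggested_action": "review" if score >= 0.25 else "none",
        "score": score,
        "evidence": evidence,          # shown to the officer, who may disagree
        "final_decision_by": "human officer",
    }


def as_automated_decision(score: float) -> dict:
    """The tool's output is treated as the end result: no human in the loop."""
    return {
        "action": "refer for extra screening" if score >= 0.25 else "no action",
        "final_decision_by": "algorithm",
    }


print(as_decision_aid(0.40, ["matched keyword in a public post"]))
print(as_automated_decision(0.40))
```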
The Human Rights Implications Checklist is divided into four sections:
- General reflections on the technical process(es) being explored
- What data is being collected
- Design choices
- Analysing impacts on human rights
The first three sections are meant to identify clearly the technical process/analysis/algorithm being designed, developed and/or applied in your research. The fourth section, building on the contextual information gathered in the first three, analyses the human rights ramifications and the possible real impact on human lives that can arise from that particular technical process/analysis/algorithm.