
Bots and Biometrics

By Alina Glaubitz | alina.glaubitz@yale.edu

Approximately 70 million people are on the move due to conflict, political instability, environmental disasters, and economic hardship. As a result, many states and international organizations involved in migration management are employing Artificial Intelligence (A.I.) and biometric data collection to strengthen border enforcement and streamline immigration decision-making. The state’s control of borders through bots and biometrics infringes on immigrants’ rights to privacy, freedom of movement, and freedom from discrimination, and places migrants at risk of digital deportation – deportation decisions prompted by new technologies.


Ever-increasing responsibilities and processes are being delegated by governments to automated systems under the pretense of neutrality and accuracy. However, these systems are by no means neutral. Algorithms are vulnerable to the same flaws that plague human decision-making – discrimination, bias, and error – while raising new problems of transparency and accountability. Seven thousand students were wrongfully deported from the United Kingdom due to an algorithm that falsely accused them of cheating on a language test. Biases embedded by an algorithm’s designers and shortcomings in its training data can compound to produce outcomes that reproduce existing patterns of discrimination. Elisa Celis, Assistant Professor of Statistics and Data Science at Yale University, noted that even “a perfect algorithm with no bias is still embedded in a system that is biased.” The increasing use of A.I. in migration management has developed into a means for states to reinforce hierarchies of rights between citizens and non-citizens, exercise control over migrant populations, and default on their responsibilities to uphold human rights.


Data collection is not an apolitical exercise, particularly when powerful actors such as states or international organizations collect the personal information of vulnerable individuals without regulated methods of oversight and accountability. This so-called “data colonialism” results in privacy breaches and raises human rights concerns. In an increasingly anti-immigrant global landscape, migration data has been misinterpreted and misrepresented for political ends.


The increasing use of A.I. bots and biometrics by states to control immigrant populations stems from the need to accelerate the processing of an ever-growing influx of people across globalized borders. Airports in Hungary, Latvia, and Greece have launched a pilot project called iBorderCtrl, introducing an A.I.-powered lie detector at border checkpoints. Under the guise of efficiency, travellers’ faces are monitored for signs of lying. This raises the concern of whether an algorithm can account for trauma and its effects on memory, or for cultural differences in communication. It is also important to consider that facial recognition technologies are markedly less accurate when analyzing the faces of women and people of color. If facial recognition technologies err in evaluating the faces of minorities, how can they be relied upon to accelerate immigration decisions?
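The scale of this problem can be shown with back-of-the-envelope arithmetic. The sketch below uses invented misidentification rates and an invented traveller volume – not measurements of iBorderCtrl or any deployed system – purely to illustrate how a modest disparity in error rates compounds at the border:

# Hypothetical arithmetic only: the rates and volume below are assumptions,
# not measurements of any real facial recognition deployment.
travellers_per_day = 10_000
error_rates = {
    "majority-group travellers": 0.01,  # assumed 1% misidentification
    "minority-group travellers": 0.05,  # assumed 5% misidentification
}

for group, rate in error_rates.items():
    # Expected number of travellers wrongly flagged each day.
    print(f"{group}: ~{int(travellers_per_day * rate)} misidentified per day")

Under these assumptions, minority travellers would be wrongly flagged five times as often – hundreds of additional errors per day, each one a person facing extra scrutiny before any human officer reviews the case.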


Bots at the Border

Since 2014, Immigration, Refugees and Citizenship Canada (IRCC) has been developing a predictive analytics tool to automate the activities of immigration officials and to support the evaluation of immigrant and visitor applications. Mathieu Genest, a spokesman for Immigration Minister Ahmed Hussen, informed The Canadian Press that the integration of A.I. into immigration decisions at the Canadian border would help “process routine cases more efficiently.” The tool, as reported, evaluates the merits of an immigration application, identifies potential red flags for fraud, and weighs these factors to recommend whether an applicant should be accepted or refused. This A.I. tool is, however, subject to the shortcomings of predictive analytics.
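IRCC has not published how its tool weighs red flags, so any concrete model is speculation. The following minimal sketch, with invented flag names, weights, and threshold, illustrates the general shape of such a recommender and why it concentrates the designer’s judgment in a few fixed numbers:

# Hypothetical sketch of a weighted red-flag recommender. The flags, weights,
# and threshold are invented for illustration; IRCC's model is not public.
RED_FLAG_WEIGHTS = {
    "inconsistent_documents": 0.5,
    "prior_refusal": 0.3,
    "incomplete_financials": 0.2,
}

def recommend(triggered_flags, refuse_threshold=0.4):
    # Sum the weights of whichever flags the application triggered.
    score = sum(RED_FLAG_WEIGHTS[flag] for flag in triggered_flags)
    return "refer toward refusal" if score > refuse_threshold else "process routinely"

print(recommend({"prior_refusal", "incomplete_financials"}))  # refer toward refusal

Every value in RED_FLAG_WEIGHTS is a policy decision frozen into code: change a weight or the threshold, and the same application yields the opposite recommendation.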


Predictive analytics draws on historical data to forecast future outcomes. Predictive policing, for example, refers to any system that analyzes data to predict where a crime may occur, or who will be involved in a crime as either the victim or the perpetrator. Immigrants have been found to be more likely than native-born defendants to face restrictions such as high bail or pretrial detention, because of the assumed risk that they will flee to their homelands before trial. This assumption does not account for familial, economic, and other ‘sunk cost’ conditions that might keep an immigrant from fleeing the host country. This form of predictive policing highlights the uncertainty inherent in an algorithm’s definition of a threat to public safety, particularly if it is fed a historically skewed training dataset that does not account for contextual evidence. Professor Celis affirmed the importance of weighing accuracy against fairness in assessing algorithmic output: a bot’s denial of entry may be accurate according to its input data, but unfair given the immigrant’s circumstances. Without contextual considerations, A.I. bots may not be fit to make accurate, fair, and just immigration decisions.
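To make the accuracy-versus-fairness distinction concrete, consider a toy model trained on the kind of skewed history described above. All numbers are invented; the point is only that a model can score well on overall accuracy while its errors fall entirely on one group:

# Invented records: (is_immigrant, actually_fled_before_trial). The history
# is skewed because past enforcement monitored immigrants more heavily.
history = (
    [(True, True)] * 30 + [(True, False)] * 70 +    # 100 immigrant cases
    [(False, True)] * 15 + [(False, False)] * 185   # 200 citizen cases
)

def predict_flight_risk(is_immigrant):
    # A naive model fitted to this history simply learns that
    # "immigrant" correlates with recorded flight risk.
    return is_immigrant

correct = sum(predict_flight_risk(imm) == fled for imm, fled in history)
print(f"overall accuracy: {correct / len(history):.0%}")  # ~72%

for group_value, label in [(True, "immigrants"), (False, "citizens")]:
    # False positive rate: how often people who did NOT flee are flagged.
    flags = [predict_flight_risk(imm) for imm, fled in history
             if imm == group_value and not fled]
    print(f"false positive rate, {label}: {sum(flags) / len(flags):.0%}")

The model is 72% accurate overall, yet every non-fleeing immigrant is wrongly flagged (a 100% false positive rate) while no citizen is – accurate by its input data, and unfair in exactly the sense Professor Celis describes.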


In April of 2018, the Treasury Board of Canada Secretariat (TBCS) released a White Paper on Responsible Artificial Intelligence in the Government of Canada. The White Paper envisioned a broad range of use cases for A.I. technologies in government and noted that these applications may be “critical to the fundamental well-being of people, the economy, and the state.” This suggests that governmental use of A.I. will become instrumental to the social contract, conditioning citizens to agree to be governed in exchange for security and other public services facilitated by A.I. technologies.


The practice of mass data collection is crucial to understanding the risks of algorithmic decision-making in public policy. Immigrants are at risk of disproportionate surveillance of their data and electronic communications – a practice known as “dataveillance.” The proposed CSE Act under Bill C-59 would allow almost entirely unrestricted collection of “publicly available information” about individuals, which is broadly defined as information “that has been published or broadcast for public consumption, is accessible to the public on the global information infrastructure or otherwise or is available to the public on request, by subscription or by purchase.” In practice, this may include anything from social media posts to information sold by data brokers to data obtained through hacks, leaks, and breaches. The bill suggests that states privilege national security over individual privacy.


Data collection and information sharing between law enforcement and the immigration system could further accommodate discriminatory profiling. In 2017, the Royal Canadian Mounted Police (RCMP) faced heavy criticism for engaging in religious and ethnic profiling of migrants near the unofficial border crossing between the United States and Quebec. The RCMP collected questionnaires from over 5,000 asylum seekers, featuring questions clearly tainted by Islamophobic stereotypes. The questionnaire sought information about social values, political beliefs, and religion, including questions about the individual’s perception of women who do not wear a hijab, their opinions on ISIS and the Taliban, and the number of times each day the individual prayed. The questions were directly targeted toward Muslim individuals crossing the border, as no questions were included about other religious practices or terrorist groups. Because values, political opinions, and religious practices are shaped by regional and national contexts, questions of this kind single out immigrants for profiling. The RCMP questionnaire exemplifies how data colored by both implicit and explicit bias produces discrimination once it is fed into algorithms.


The Canada Border Services Agency (CBSA) uses a Scenario Based Targeting (SBT) system to identify potential security threats, using algorithms to process large volumes of personal information (such as age, gender, and nationality) to profile individuals. The system assesses travellers for “predictive risk factors” in areas that include immigration fraud, organized and transnational crime, smuggling of contraband, illegal travel routes, and terrorism. The Canadian Passenger Protect program (also referred to as the “no-fly list”) has been criticized for its false positives and its listing of children. While human judgment in immigration decisions permits individual acts of racial, national, and religious profiling, the state’s use of biased algorithms institutionalizes these biases on a mass, automated scale. As such, human and “artificial” judgment must intersect to prevent unjust digital deportation.
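The CBSA’s actual scenarios are not public, so the sketch below invents a single coarse demographic rule to show the mechanism – and why systems built this way generate the structural false positives the Passenger Protect program has been criticized for:

# Hypothetical scenario-based targeting. The scenario below is invented;
# real SBT rules are classified.
SCENARIOS = [
    # Flags any traveller matching every attribute in the scenario.
    {"age_range": (18, 35), "gender": "male", "nationality": "X"},
]

def matches(traveller, scenario):
    low, high = scenario["age_range"]
    return (low <= traveller["age"] <= high
            and traveller["gender"] == scenario["gender"]
            and traveller["nationality"] == scenario["nationality"])

def flag_for_screening(traveller):
    return any(matches(traveller, s) for s in SCENARIOS)

# Every young man of nationality X is flagged, regardless of conduct.
print(flag_for_screening({"age": 24, "gender": "male", "nationality": "X"}))  # True

Because the rule keys on group attributes rather than individual behavior, every member of the matched demographic is treated as a risk – profiling executed automatically, at scale.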


Algorithmic definitions of threats to national security are tainted by historical biases, while dataveillance of immigrant populations places the state’s need for national security above the immigrant’s right to privacy. Human judgment must complement artificial intelligence to prevent institutionalized discrimination and consequent unjustified digital deportation.


Biometrics

Biometrics refers to the technology of measuring, analyzing, and processing digital representations of unique biological data and behavioral traits. Biometric identifiers may be based on physiological patterns, such as fingerprints, hand geometry, and iris, face, or voice recognition, or on behavioral patterns, such as handwritten signature verification, keystroke dynamics, or gait analysis. Biometrics are increasingly seen by states as a means of ascertaining individual identity with greater precision, as well as providing more secure checks on access to both virtual and physical spaces. What distinguishes biometrics is their quality as unique, permanent, and universal imprints of a person’s identity.
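At a technical level, a biometric system never confirms identity outright; it compares a stored template with a live capture and applies a similarity threshold. The sketch below, with invented feature vectors and an invented threshold, shows the basic mechanism and its built-in trade-off:

import math

def cosine_similarity(template_a, template_b):
    # Compare two feature vectors (real systems use high-dimensional
    # templates; these short vectors are invented for illustration).
    dot = sum(a * b for a, b in zip(template_a, template_b))
    norm_a = math.sqrt(sum(a * a for a in template_a))
    norm_b = math.sqrt(sum(b * b for b in template_b))
    return dot / (norm_a * norm_b)

def same_person(template_a, template_b, threshold=0.95):
    # The threshold sets the trade-off: lower it and impostors slip
    # through (false matches); raise it and genuine travellers are
    # rejected (false non-matches). No setting is error-free.
    return cosine_similarity(template_a, template_b) >= threshold

enrolled = [0.11, 0.42, 0.87, 0.30]   # template stored at enrollment
live_scan = [0.12, 0.40, 0.86, 0.33]  # template captured at the border
print(same_person(enrolled, live_scan))  # True

The consequences of that threshold fall on the traveller: a false non-match at a food distribution point or a border checkpoint means a denied ration or a denied crossing.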


The very act of taking fingerprints has been considered an affront to the right to privacy – a violation of physical integrity and the right to a private life. This perception of biometric fingerprinting has been echoed in national and international legislation. Hygienic and cultural concerns further render biometrics an invasive technology. The touching of fingerprint scanners, for example, has become particularly sensitive since the SARS outbreak. In addition, in certain cultures fingerprinting carries the stigma of criminal activity.


The use of new technologies in border spaces marked by power differentials raises issues of informed consent. When refugees in Jordanian refugee camps have their irises scanned in order to receive their weekly food rations, are they able to opt out? Must they sacrifice their privacy and permit the collection of their personal data to feed their families? It is crucial to recognize that the biometric data of vulnerable populations can be exploited. The World Food Programme, for example, recently signed a $45 million deal with Palantir Technologies, the same company that has provided technology to support detention and deportation programs run by U.S. Immigration and Customs Enforcement. What will happen to the data of 92 million aid recipients shared with Palantir? The trade-off between complying with biometric data collection and receiving the means to survive in refugee camps may subject this particularly vulnerable category of immigrants to refoulement and re-persecution.


Biometric technology intensifies the ongoing friction between the individual and the state. The delicate balance between an individual’s fundamental right to privacy and a state’s responsibility to defend and control entry to its territory must be safeguarded by principles of data protection. One of the more controversial aspects of biometric technology is the retention of information once it has served its initial purpose. “Function creep,” in which an individual is screened without their knowledge and their personal information is used without their consent, may lead to biometric data revealing DNA, racial, religious, or health information about an immigrant that can then be used for commercial purposes or, more critically, for tracking and surveillance.


Biometric data collection has implications for the prospects of immigrant citizenship in the host country, as well as immigrants’ right to privacy. Biometric technology is an instrument of state control over who constitutes a member of a nation, while enabling power differentials in border spaces due to non-consensual data collection and invasions of privacy.


At the state level of analysis, the elevated costs of biometric equipment perpetuate inequality between more technologically advanced countries and less developed states, making the borders of developed states even more impenetrable. Meanwhile, the efficiency of bot-based border control systems may alleviate the administrative and personnel burden of border security for developing nations, which are generally more exposed to immigration. This discrepancy also raises the question of which actors benefit from these new technologies. States and security agencies benefit more from A.I. bots than immigrant populations do. Immigrants do not have a seat at the table, and their rights are infringed upon by these new technologies.


Conclusion

With the increasing use of emerging technologies such as A.I. bots and biometric data collection in border spaces, it is critical to balance efficiency with fairness. Artificial Intelligence overlooks the contextual considerations of immigration decisions, and algorithmic definitions of threats to national security are tainted by historical biases. Technologies that collect data and accelerate its processing entrench xenophobia; human judgment must therefore complement artificial intelligence to prevent institutionalized discrimination. Biometric data collection has implications for the prospects of immigrant citizenship in the host country, as well as for immigrants’ right to privacy, given dataveillance and the lack of informed consent in data collection. Immigrants are not the intended beneficiaries of these technologies and must sacrifice their freedom from discrimination, freedom of movement, and right to privacy in order to enter a host country and avert digital deportation.


