As Governments Build Advanced Surveillance Systems to Push Borders Out, Will Travel and Migration Become Unequal for Some Groups?
A cyberattack on a little-known U.S. Customs and Border Protection (CBP) subcontractor in June 2019 exposed the country’s secretive and increasingly complex border surveillance system. The hack disclosed tens of thousands of travelers’ facial images, license plate numbers, and the technology used to capture this information, including facial-recognition cameras, security system blueprints, and maps of ports of entry, demonstrating the growing role of—and reliance on—technology in border surveillance.
The expansion of border management systems is not a phenomenon limited to the United States; governments worldwide are looking for faster, more precise ways to secure and manage legitimate travel and cargo through international borders. Global travel is increasing, with the International Air Transport Association expecting 7.2 billion passengers in 2035, nearly double the 3.8 billion passengers who traveled by air in 2016. Government authorities, therefore, have significant concerns about how to manage entry and exit at borders—both for short-term travelers and would-be immigrants. Yet the increasing sophistication of the resulting data systems, their powerful integration across databases, and the sharing of some information between agencies and even countries has happened almost entirely behind closed doors, without public debate.
Box 1. The Mobility Paradigm
The mobility paradigm explores the movement of people and goods, and the social implications of this movement, beyond traditional concepts such as migration and trade. This framework considers tourists, international students, business travelers, migrant workers, and those living in borderlands. It addresses those who cross international and national borders at different rates, at different speeds, and for distinct purposes. Mobility also brings to light those who are not mobile—the immobile.
Global mobility has been on the rise for decades. However, the securitization of mobility took a sharp turn after September 11, 2001. There was a sudden realization that the systems in place were not fit for purpose and that travel explicitly needed higher security. A comprehensive border management system that integrated databases, physical security, and data tracking, as well as communication across relevant government bodies, was lacking. This was especially visible in air travel, where physical screening increased and agencies such as the U.S. Transportation Security Administration (TSA) were created. As authorities recognized that they could not tackle the issue on their own, new programs and security efforts proliferated across continents, with information sharing between countries becoming a cornerstone of travel securitization.
Emerging technologies have been key tools in this endeavor. Data gathering and risk analysis have been enhanced and enlarged with new automation. These technologies have also allowed governments to apply security mechanisms more discerningly and discreetly—they may pick and choose which travelers or migrants should receive extra screening. As vigilance has decreased on certain travelers, it has increased on others. Furthermore, as more countries share data, a visa rejection decision in one country could doom a person’s travel plans to another—all happening in a behind-the-scenes process that does not allow for input, or challenge, by the individual.
This article examines how data gathering and risk analysis have contributed to states’ sifting of people on the move, the externalization of borders, the secrecy of such systems, and the ramifications for mobility. Although the world is more mobile overall, it is less equally so, as certain groups find travel significantly more accessible while others are shut out of regular routes and rendered immobile due to their particular combination of nationality, gender, and finances—or their inability to document these attributes.
The Increasing Use of Risk Analysis and Data in Border Management
Risk analysis (also known as risk assessment) allows governments to determine the risk level of a traveler or intending migrant, predicated on the anticipatory nature of a perceived threat, such as crime or negligent behavior. While risk analysis does not require advanced technology to sort people into risk categories, algorithmic programs and machine learning have become integral to border management because of the commonly held assumption that they are less biased than direct human analysis.
A risk analysis of incoming travelers and migrants could, with a single programming maneuver, flag a number of people on a flight manifest for extra screening upon arrival, or keep them from getting on the plane. This is not a new concept; security authorities in Israel have used passengers’ data to profile and filter them for distinct securitization experiences for decades. Israel, however, has only two international airports and considers border security a primary national concern, alongside having a highly developed tech private sector. Other countries have invested more in high-tech risk assessment tools in the years since the 9/11 attacks.
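The kind of manifest-wide flagging described above can be pictured with a small sketch. All names, scores, and the threshold here are invented, since real systems disclose none of these details:

```python
# Hypothetical illustration: flag manifest entries whose precomputed risk
# score exceeds a cutoff. Passengers, scores, and the threshold are invented.
RISK_THRESHOLD = 0.7  # assumed cutoff; real systems do not publish theirs

manifest = [
    {"passenger": "A", "risk_score": 0.12},
    {"passenger": "B", "risk_score": 0.85},
    {"passenger": "C", "risk_score": 0.40},
    {"passenger": "D", "risk_score": 0.91},
]

def flag_for_screening(manifest, threshold=RISK_THRESHOLD):
    """Return the passengers selected for extra screening upon arrival."""
    return [entry["passenger"] for entry in manifest
            if entry["risk_score"] >= threshold]

print(flag_for_screening(manifest))  # passengers B and D in this toy data
```

A single change to the threshold, applied across an entire manifest in one pass, is what makes this kind of screening so different in scale from an officer reviewing files one by one.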
Box 2. Machine Learning and Algorithms
Machine learning is a process in which a computer uses algorithms and statistical models to produce results from data without being explicitly programmed for each case. It relies on inputs and on patterns found in the data. Learning can be supervised (trained on labeled examples) or unsupervised (left to find patterns on its own). The model refines its outputs over time as it receives more inputs.
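As a concrete, and entirely invented, illustration of the supervised case described in Box 2: a system could “learn” a risk cutoff from past labeled decisions and then apply it to new cases.

```python
# Toy supervised learner: derives a risk threshold from labeled historical
# examples, then scores new cases. Entirely illustrative; real border systems
# use far richer features and undisclosed models.
def fit_threshold(examples):
    """examples: list of (score, was_flagged) pairs from past decisions.
    Returns the midpoint between the lowest flagged score and the
    highest cleared score."""
    flagged = [s for s, label in examples if label]
    cleared = [s for s, label in examples if not label]
    return (min(flagged) + max(cleared)) / 2

history = [(0.25, False), (0.125, False), (0.75, True), (0.875, True)]
threshold = fit_threshold(history)  # 0.5 on this toy data

def predict(score):
    """Apply the learned threshold to a new case."""
    return score >= threshold
```

The point of the sketch is the workflow, not the arithmetic: the “model” is only as good as the past decisions it was trained on, which is where the article’s concerns about bias enter.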
Risk-analysis algorithms require data and intelligence inputs, and government agencies, in turn, must find ways to access these inputs. In June 2019, for example, the U.S. State Department announced a new requirement for foreign travelers seeking a visa and would-be immigrants to share the social media account names and email addresses they had used over the previous five years. Other countries access inputs via agreements with countries that already hold the data. The United Kingdom, for instance, is able to obtain personal and immigration data from the United States under a 2013 information-sharing agreement. Regardless of how the inputs were gathered, these data, once compiled in one place, become that migrant or traveler’s data double (see Box 3).
A person’s data are entered into databases and then categorized. Individual data points are taken as evidence of certain traits: a medical record to signify good (or bad) health, a university diploma to signify level of education. Poor financial records, for example, could be taken as evidence of irresponsibility or an inability to hold a job.
Box 3. Creation of the Data Double
A data double is a term used to refer to the compilation of a person’s data points, potentially alongside his or her digital footprint (the digital markers people leave behind on the internet). It includes information such as passport information; biometrics, which are biological traits used for recognition (such as fingerprints and pictures of an individual’s face); and information such as financial and vaccination records and proof of employment.
Depending on the amount of data accessible or requested, it could also include social media posts online and travel history.
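One way to picture the data double described in Box 3 is as a single aggregated record. The field names below are hypothetical, chosen only to mirror the categories the box lists:

```python
# Illustrative structure for a "data double": the aggregated record a state
# might hold on one traveler. All field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataDouble:
    passport_number: str
    fingerprint_hash: str            # biometric template, stored as a digest
    face_image_id: str               # pointer into a facial-recognition gallery
    financial_records: list = field(default_factory=list)
    vaccination_records: list = field(default_factory=list)
    travel_history: list = field(default_factory=list)
    social_media_handles: list = field(default_factory=list)  # cf. 2019 U.S. rule

traveler = DataDouble("X1234567", "sha256:ab12...", "gallery/0042",
                      travel_history=["FR-2018", "NG-2019"])
```

The structure makes the article’s later point visible: whatever is absent from the record (an ambition, a changed circumstance) simply does not exist for the algorithms that consume it.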
The data double is meant to create the most thorough possible profile of a traveler or migrant. This profile is processed through algorithms to assign risk levels and to predict the aims of the traveler or migrant. However, since algorithms are created by people, they carry the biases of their creators, rendering the process imperfect. The meanings assigned to specific data points, such as the significance of previous visa rejections, are decided by those creators. In addition, as the machine learns from its own outputs, patterns related to categorization become more fixed, further stratifying travelers. For instance, according to U.S. officials, the addition of Nigeria to the list of countries facing heightened travel restrictions in January 2020 was due to a heightened risk environment in Nigeria and the number of Nigerian visa overstays. The U.S. Department of Homeland Security (DHS) explicitly noted the use of an assessment model to rank countries’ performance against its criteria on information sharing, identity-management systems, and public-safety risks. It is not publicly known whether this assessment model includes machine learning or algorithms, but it seems likely that it does. In such a scenario, if several Nigerians committed a terrorist act in the United States, barriers for Nigerian would-be immigrants likely would rise as the machine assigned a higher level of risk to Nigerians. Moreover, others with profiles similar to the perpetrators’—men of a similar age, from a similar area, or with a similar background—could also find themselves moved into higher-risk categories.
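The self-reinforcing categorization described above can be sketched as a simple feedback loop; the group names, weights, and update rule are all invented for illustration:

```python
# Sketch of the feedback loop described above: adverse outcomes attributed to
# one member of a group raise the whole group's baseline risk, hardening the
# category over time. Numbers and update rule are invented.
group_risk = {"country_a": 0.2, "country_b": 0.2}

def record_incident(group, weight=0.1):
    """An adverse event attributed to one traveler raises the entire
    group's baseline score, capped at 1.0."""
    group_risk[group] = min(1.0, group_risk[group] + weight)

record_incident("country_b")
record_incident("country_b")
# country_b now carries roughly double country_a's baseline, so travelers
# from country_b with otherwise identical profiles face higher barriers.
```

Nothing about any individual traveler changed between the two calls; only the group-level input did, which is the stratifying dynamic the article describes.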
Yet a data double is not a person: humans cannot be fully summarized through data because certain traits, such as ambition or respect for the rule of law, cannot be quantitatively measured and assured. Therefore, the system denies a holistic approach and instead opts for reinforcing assumptions about what certain data points mean. Indeed, algorithms are not only vulnerable to error and discrimination but may compound bias due to the nature of the system.
The human effect on risk analysis is evident: U.S. Immigration and Customs Enforcement (ICE)’s Risk Classification Assessment was reprogrammed in 2018 to remove the “release” option for unauthorized migrants, creating a system that always recommended detention. When the system was first introduced in 2013, it was modeled on criminal justice reforms intended to reduce pretrial detention. Even prior to the 2018 reprogramming, however, the system hardly ever recommended release; one DHS Office of Inspector General report found that release was suggested only 0.6 percent of the time between July 2012 and December 2013, and DHS called the system ineffective in managing complicated cases. In Canada, Immigration, Refugees and Citizenship Canada (IRCC) has been working to procure a predictive analytics system for cases such as preremoval risk assessments. These efforts have been heavily criticized by immigrants’ rights activists and the Citizen Lab at the University of Toronto over issues of bias and possible discrimination.
Channeling through Data
Still, government authorities continue to use data gathering and risk analysis as primary tools to categorize people on the move. This categorization, or migration channeling, creates distinct channels to process travelers. These channels are often marked by different levels of security and requirements. Risk analysis aids in channeling travelers into groups of different risk levels so that governments can apply resources accordingly.
Authorities aim to channel migrants as far away from their physical borders as possible, both to minimize flows of “unchecked” persons and to avoid overburdening ports of entry. Externalization thus makes the border the last line of defense or management. Projecting the border outward occurs through visa centers and consulates, through trusted and frequent traveler programs, and by partnering with local actors.
The U.S. Electronic System for Travel Authorization (ESTA) was implemented in 2007 based on recommendations from the 9/11 Commission. ESTA grants authorization to travelers from countries in the Visa Waiver Program—countries already deemed lower risk. Canada introduced an Electronic Travel Authorization in 2015, and the European Commission proposed a registered traveler program in its 2013 Smart Borders Package, although at the time of writing it had been put on pause. Data collection undergirds these systems: in the United States, an electronic passport is required to take part in the Visa Waiver Program because an e-passport can be scanned, checked against databases, and recorded in them.
Trusted Traveler Programs
Trusted and frequent traveler programs—which have multiplied in recent years—utilize voluntary data sharing, such as fingerprints, travel history, and personal information found in passports; many programs also include an in-person interview. In return, government authorities offer faster trips through ports of entry. Such programs include the UK registered traveler program, Hong Kong’s e-channel, and Germany’s EasyPASS. In a recent effort by CBP to implement a biometric entry/exit system, the agency has been scanning the faces of travelers departing from and arriving at select U.S. airports. Travelers with Global Entry, who are usually required to scan their fingerprints upon arrival at U.S. customs, may now enter without even presenting their passports if their faces were scanned before boarding. This facial-recognition technology is part of a set of technological measures that make mobility easier for some while raising barriers for others.
These efforts may also give other countries advance warning and prevent unwanted people from reaching the border or entering the country. Some industrialized countries partner with neighbors: the United States has multiple data-sharing initiatives with Canada and Mexico. Frontex, the European border and coast guard agency, has set up risk-analysis cells across the African continent, the first in Niger, where Frontex trained local officers who analyze data on cross-border crimes, including unauthorized entry. The data gathered are shared with Frontex and regional governments. Countries looking to project their borders outward have an interest in neighboring states developing data systems, since those systems give their governments more data to analyze and assess. Data management is seen as a prerequisite for migration policy, as evidenced by the International Organization for Migration (IOM) Migration Information and Data Analysis System (MIDAS), which the organization created for states with limited revenue. MIDAS is active in 20 countries, including Niger.
Intelligent Border Controls
When travelers reach ports of entry at international borders, risk assessments continue, and multiple states are currently experimenting with risk analysis at the border itself. The European Union (EU) funds the development of an intelligent control system, iBorderCtrl, which is built to detect deception based on biometric cues. In current tests, iBorderCtrl gives the customs agent a risk number that is not visible to travelers; it is not known how this number is weighed when deciding entry. In the United States, the Future Attribute Screening Technology (FAST) supposedly screens travelers for ill intent. Both iBorderCtrl and FAST claim, on weak grounds, to act as lie detectors.
Experts who work on deception detection and artificial intelligence have generally called into question the ability of a machine to detect lies and to read motivations solely from physical cues. In Canada, the Scenario Based Targeting System (SBT) uses algorithms to assess individuals’ risk levels. The Court of Justice of the European Union highlighted SBT in a 2017 opinion on the proposed EU-Canada Passenger Name Records data-sharing agreement, noting its algorithmic nature and its inconsistency with EU fundamental rights; a Canada Border Services Agency audit found that SBT collected and retained data not directly related to the program’s stated purpose and that privacy risks were not sufficiently mitigated by law enforcement and intelligence partners.
Privacy, Accountability, and Oversight Concerns
Governments are relying more and more on risk-assessment technologies, yet migrants and travelers remain in the dark about how decisions are made, where to appeal them, and how they are overseen. The June 2019 CBP hack exemplifies the problems that accompany the collection of such sensitive data, particularly amid a growing nexus of for-profit companies engaging with government agencies. Some authorities have purposefully shared sensitive data: in July 2019, the EUobserver reported that the United Kingdom had been illegally copying people’s data from the Schengen Information System for years and sharing it with contractors such as IBM. Even within the EU-Niger deal, there are risks of breaches due to Niger’s weak privacy laws.
Furthermore, the data being shared, purposefully or via hack, are more foundational than ever before. Fingerprints, iris and facial scans, and license plate numbers leave travelers open to identity theft and other harms. And if a data-entry mistake is made, it becomes much harder to prove the record wrong and have it corrected.
Concerns go beyond the safety of information provided to government authorities and contractors. If a traveler is preemptively blocked or her visa is rejected, she lacks a method of redress. Citing security concerns, authorities do not want to share which data points are flagged as risky, even though certain practices could constitute profiling or discrimination. In June 2019, when a risk-assessment streaming tool used for visitor visas and entry clearances for settlement was criticized, the UK Home Office refused to share information about how risk was assessed or how the algorithm was updated. This type of situation leaves many people without an answer, and without access to regular channels of movement. Even within governments, it is unclear how these systems are overseen and to what extent they are relied on to make crucial decisions about people’s access to mobility. Ultimately, the lack of transparency in how risk is determined and what factors are considered renders travelers and migrants unable to access their profiles or confirm why they were rejected.
The Mobile and the (Im)mobile
Some prospective travelers or migrants never reach the step of risk analysis in the visa system or at the border. Government authorities have created hypothetical risk profiles based on unwanted behavior, such as visa overstays or crime. These “risky profiles” are sent to consulates and contracted visa processors as guidance on the types of travelers that should be flagged or rejected before a visa is even issued. Unbeknownst to potential travelers, their resemblance to these unseen profiles could keep them from accessing regular channels.
Ultimately, these practices separate those who become extremely mobile from those who are rendered immobile. Some individuals may not have enough data available to share in order to travel. This is particularly the case for those who lack internationally accepted government identification documents or robust financial records, disproportionately affecting older, more rural, and less formally educated people in developing countries, who are already playing catch-up within the global “digital divide.” For others, their data double matches too closely an algorithm-created risk profile. Risk analysis is predicated on the anticipatory nature of a perceived threat, such as crime or negligent behavior. If one government’s algorithm makes such an assumption, the resulting negative decision may then be shared with another government, especially as these programs become more integrated. For example, the Five Country Conference of the United States, United Kingdom, New Zealand, Australia, and Canada shares passenger information, including visa rejections. And the United Kingdom is increasing the number of countries whose nationals are eligible to use ePassport gates at ports of entry. This could amount to a global dividing up of approved and unapproved countries, making mobility from countries deemed troublesome, such as Iran or Cuba, even more difficult. Indeed, this is already evident in heightened U.S. travel restrictions, which explicitly mention noncooperation with U.S. authorities on information sharing as a reason for restriction. On the other hand, individuals from “cooperative” countries may still find themselves immobile: in one case identified by French academic Didier Bigo, women from a village in Côte d’Ivoire were unable to enter the European Union due to prostitution charges lodged a decade earlier against some émigrés from their village.
While claims abound that the world is becoming more mobile, the phenomenon is not evenly distributed. The requirements to become mobile are growing insurmountable for some who are deemed undesirable. Their mobility is limited not only in the places they have tried to access, but also in places where their data and risk levels have been accessed, constructing a pan-national network of data algorithms. This exclusion may lead them to turn to irregular means of entering a country, further compounding risk—not only for themselves but also for others who “look” like them.
Notwithstanding these concerns, data gathering and sharing, predictive tools, and other emerging technologies are increasingly common in border management across the globe. Their rise has occurred without much public debate, raising questions over privacy, data protection, and accountability in a multitude of systems worldwide. Whether governments ultimately find this system useful will depend on how immobile the immobilized remain, how many benefits the mobile reap, and governments’ continued, free-flowing access to data.
Alba, Davey. 2019. The U.S. Government Will Be Scanning Your Face At 20 Top Airports, Documents Show. BuzzFeed News, March 11, 2019. Available online.
American Civil Liberties Union. N.d. Border Security Technologies. Accessed October 1, 2019. Available online.
Amoore, Louise. 2011. Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times. Theory, Culture & Society, 28 (6): 24–43.
Andersson, Ruben. 2016. Hardwiring the Frontier? The Politics of Security Technology in Europe’s “Fight against Illegal Migration.” Security Dialogue 47 (1): 22-39.
Andersson, Ruben and David Keen. 2019. Partners in Crime? The Impacts of Europe’s Outsourced Migration Controls on Peace, Stability, and Rights. London: Saferworld. Available online.
Andrijasevic, Rutvica and William Walters. 2010. The International Organization for Migration and the International Government of Borders. Environment and Planning D: Society and Space 28: 977-99.
Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. Pro Publica, May 23, 2016. Available online.
BBC News. 2019. U.S. Demands Social Media Details from Visa Applicants. BBC News, June 1, 2019. Available online.
Bigo, Didier. 2014. The (In)securitization Practices of the Three Universes of EU Border Control: Military/Navy - Border Guards/Police - Database Analysts. Security Dialogue 45 (3): 209-25.
Blair, David. 2016. Israel’s Risk-Based Approach to Airport Security 'Impossible' for European Airports. The Telegraph, May 20, 2016. Available online.
Bolt, David. 2017. An Inspection of Entry Clearance Processing Operations in Croydon and Istanbul November 2016 – March 2017. London: Independent Chief Inspector of Borders and Immigration. Available online.
Egbert, Simon and Bettina Paul. 2019. Preemptive “Screening for Malintent:” The Future Attribute Screening Technology (FAST) as a Double Future Device. Futures 109: 108–16.
European Commission. 2018. Smart Lie-Detection System to Tighten EU’s Busy Borders. Updated October 24, 2018. Available online.
Frontex. 2018. Frontex Opens First Risk Analysis Cell in Niger. News release, November 27, 2018. Available online.
---. 2019. Frontex Opens Risk Analysis Cell in Senegal. News release, June 13, 2019. Available online.
Gerstein, Daniel et al. 2018. Managing International Borders: Balancing Security with the Licit Flow of People and Goods. Santa Monica, CA: RAND Corporation. Available online.
Harwell, Drew. 2019. Hacked Documents Reveal Sensitive Details of Expanding Border Surveillance. The Washington Post, June 21, 2019. Available online.
Huysmans, Jef. 2006. The Politics of Insecurity: Fear, Migration, and Asylum in the EU. Abingdon, UK: Routledge.
International Air Transport Association (IATA). 2016. IATA Forecasts Passenger Demand to Double Over 20 Years. Press release, October 18, 2016. Available online.
International Organization for Migration (IOM). N.d. Migration Data Management, Intelligence, and Risk Analysis. Accessed October 3, 2019. Available online.
Jeandesboz, Julian. 2016. Smartening Border Security in the European Union: An Associational Inquiry. Security Dialogue 47 (4): 292-309.
---. 2017. European Border Policing: EUROSUR, Knowledge, Calculation. Global Crime 18 (3): 256-85.
Kanno-Youngs, Zolan. 2020. Trump Administration Adds Six Countries to Travel Ban. The New York Times, January 31, 2020. Available online.
Martin, Hugo. 2016. Can Israeli-type Security Measures Work at LAX and Other U.S. Airports? Los Angeles Times, July 18, 2016. Available online.
Molnar, Petra and Lex Gill. 2018. Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System. Toronto: International Human Rights Program and the Citizen Lab, University of Toronto. Available online.
Muller, Benjamin J. 2011. Risking It All at the Biometric Border: Mobility, Limits, and the Persistence of Securitisation. Geopolitics 16(1): 91-106.
Nielsen, Nikolaj. 2019. UK Taking 'Steps' after Illegal Copying of EU Schengen Data. EUobserver, July 25, 2019. Available online.
Noferi, Mark and Robert Koulish. 2014. The Immigration Detention Risk Assessment. Georgetown Immigration Law Review 29 (45).
Oberhaus, Daniel. 2018. ICE Modified Its 'Risk Assessment' Software So It Automatically Recommends Detention. VICE, June 26, 2018. Available online.
Office of the Privacy Commissioner of Canada. 2017. Canada Border Services Agency—Scenario Based Targeting of Travelers—National Security. Ottawa: Office of the Privacy Commissioner of Canada. Available online.
Regulation of the European Parliament and of the Council amending Regulation (EU) 2016/399 as regards the use of the Entry/Exit System. 2016. European Commission. April 6, 2016. Available online.
Salter, Mark B. 2006. The Global Visa Regime and the Political Technologies of the International Self: Borders, Bodies, Biopolitics. Alternatives 31: 167-89.
---. 2008. When the Exception Becomes the Rule: Borders, Sovereignty, and Citizenship. Citizenship Studies 12(4): 365-80.
Scheel, Stephan. 2013. Autonomy of Migration Despite Its Securitisation? Facing the Terms and Conditions of Biometric Rebordering. Millennium: Journal of International Studies 41 (3): 575–600.
Sharma, Chinmayi. 2019. The National Vetting Enterprise: Artificial Intelligence and Immigration Enforcement. Lawfare Blog, January 8, 2019. Available online.
Sheller, Mimi and John Urry. 2006. The New Mobilities Paradigm. Environment and Planning A: Economy and Space 38: 207-26.
Sonnad, Nikhil. 2018. US Border Agents Hacked Their “Risk Assessment” System to Recommend Detention 100% of the Time. Quartz, June 26, 2018. Available online.
Statistics New Zealand. 2018. Algorithm Assessment Report. Wellington: Statistics New Zealand. Available online.
UK Secretary of State for Foreign and Commonwealth Affairs and U.S. Department of Homeland Security. 2013. Agreement between the Government of the United Kingdom of Great Britain and Northern Ireland and the Government of the United States of America for the Sharing of Visa, Immigration, and Nationality Information. April 18, 2013. Available online.
U.S. Department of Homeland Security (DHS). 2017. Privacy Impact Assessment Update for the Automated Targeting System. Last updated July 29, 2019. Available online.
---. 2019. Five Country Joint Enrollment and Information-Sharing Project (FCC). Last updated May 10, 2019. Available online.
---. 2020. 2020 Travel/Visa Restrictions. January 31, 2020. Available online.
DHS Office of Inspector General. 2015. U.S. Immigration and Customs Enforcement's Alternatives to Detention (Revised). Washington, DC: DHS, Office of Inspector General. Available online.
---. 2018. ICE’s Inspections and Monitoring of Detention Facilities Do Not Lead to Sustained Compliance or Systemic Improvements. Washington, DC: DHS, Office of Inspector General. Available online.
U.S. Department of State. 2019. Collection of Social Media Identifiers from U.S. Visa Applicants. Last updated June 4, 2019. Available online.
---. N.d. Visa Waiver Program. Accessed September 3, 2019. Available online.
Warrell, Helen. 2019. Home Office under Fire for Using Secretive Visa Algorithm. Financial Times, June 9, 2019. Available online.
World Economic Forum. 2018. The Known Traveler: Unlocking the Potential of Digital Identity for Secure and Seamless Travel. Geneva: World Economic Forum. Available online.
Yong, Ed. 2018. A Popular Algorithm Is No Better at Predicting Crimes Than Random People. The Atlantic, January 17, 2018. Available online.
Zandonini, Giacomo. 2019. Biometrics: The New Frontier of EU Migration Policy in Niger. The New Humanitarian, June 6, 2019. Available online.