The impact of digital technologies on migration is often presented as a positive one. Think of Facebook groups that allow refugees to stay in touch with friends and family in their home countries, or online networks like Informed Immigrant that connect undocumented people to legal services, health care, education, employment, and other life-saving resources.
Unfortunately, these examples only tell one side of the story. According to Koen Leurs, a professor of gender, media, and migration studies at Utrecht University in the Netherlands, the majority of new digital technology is used to control migrants rather than protect or empower them. “The situation is quite bleak,” Leurs tells The Progressive. “There has been a huge acceleration in their dehumanization.”
This acceleration is one of the key lessons Leurs took away while working on his new book, Digital Migration, which was published by SAGE in July 2023 and described by one of Leurs’ colleagues as “poised to become a touchstone text” in the field.
Around the world, migrants find themselves serving as the unwilling and unknowing test subjects of technologies that were not invented with their wellbeing in mind. This includes apps like iBorderCtrl, an AI-powered lie detector developed to optimize and automate immigration procedures within the European Union, which saw surges in immigration during the Syrian Civil War beginning in 2011 and again following Russia’s invasion of Ukraine in 2022.
Introduced to border patrols in Greece, Hungary, and Latvia between 2016 and 2019, iBorderCtrl analyzes facial expressions to determine whether refugees are telling the truth during screenings. Journalists reported that the algorithm did not always work as intended, and human rights groups warned of its proclivity to discriminate on the basis of race and gender.
iBorderCtrl is not the only AI application to receive funding from the European Union. EUMigraTool—developed through Horizon 2020, a research and innovation program geared toward bolstering security—is designed to use real-time data gathered from websites and social media platforms to predict the size, direction, and even the societal consequences of emerging migratory patterns. According to a recent study from EuroMed Rights, the program violates European privacy laws and risks “generating and exacerbating assumptions that particular groups present a ‘security risk’ . . . and encouraging punitive responses geared toward the interdiction of movement.”
Many of these technologies could be described as dystopian. In 2017, as part of an initiative to incorporate blockchain technology in the administration of humanitarian aid, the United Nations World Food Program introduced a system that required more than one million refugees in Jordan and Bangladesh to obtain their allotted food, water, and medical supplies by scanning their irises.
Although the Black Mirror-esque initiative saved vendors a reported $2.5 million in transaction fees, it was immensely unpopular among the people who were compelled to use it. Petra Molnar, a fellow at Harvard University’s Berkman Klein Center for Internet and Society, doubts the technology could have been tested anywhere else. “Imagine what would happen at your local grocery store if all of a sudden iris scanning became a thing,” she told Reuters. “People would be up in arms. But somehow it is OK to do it in a refugee camp.”
Pushback against potentially harmful digital technologies is less widespread in the United States, where privacy laws are more lenient and migration policy is stricter. Still, pushback does occur. In August 2023, human rights advocates sued Customs and Border Protection for having migrants request entry into the United States via a smartphone app called CBP One.
“It’s unfathomable that a refugee who just traveled across nine different countries with only the clothes on their back would somehow have access to a very expensive smartphone,” Angelo Guisado, an attorney on the case, told The Guardian. Although the app is available in five languages, including Spanish, the translations are filled with confusing errors. On top of this, the app frequently glitches, at times rendering it unusable.
Like many, Leurs is particularly worried about the future of AI, which he describes as a “black box” whose inner workings cannot be easily understood even by the people who operate it. By digitizing border control, government agencies risk creating a situation in which asylum seekers are turned away not because they pose an actual security threat, but because a malfunctioning piece of machinery claims that they do. (A reporter from The Intercept tested iBorderCtrl and was falsely accused of lying in response to four of the sixteen questions; had this happened to an immigrant, they would have been flagged for further questioning.)
Civil society groups have been working to avoid such a scenario. Under pressure from voters, the European Union is now considering the final drafts of the so-called AI Act, which, if approved, could permanently ban facial and emotional recognition technologies like iBorderCtrl, biometric categorization systems like the World Food Program’s iris scanner, predictive policing software like EUMigraTool, and any other AI-powered tool designed to manipulate the politically and economically vulnerable.
While Leurs is pleased with this development, he believes a great deal of the heavy lifting must be done by researchers, who, he says, should recognize that technology is neither inherently neutral nor modernizing, and who need to keep raising ethical questions like: “For whom are we gathering knowledge?”, “Who will be affected by our research?” and, most importantly, “Who will benefit from this project: governments and corporations, or the migrants themselves?”