“The arms industries are saying: ‘This is a security problem, so buy my weapons, buy my drones, buy my surveillance system.’”
– Özlem Demirel, Member of the European Parliament
In December 2021, the Guardian published an investigation into the hundreds of millions of euros that the EU and its member states pour into the militarisation of our borders.
This spending continues even though these tactics don’t work, cause harm, and make coming to the EU more dangerous for people who have few regular channels through which to arrive.
The chillingly intimate relationship between policing and immigration control, enabled by technology, is exposed in the words of a Dutch police officer:
“We have a police app, and if I arrest someone, I can just check their information, and I know right away where they live, what they’ve done, and so on. I pull that out of the system. So if it is an illegal alien, a short message comes up that says ‘Resides illegally in the Netherlands’, so I contact the immigration police.”
The policing of migrants is thus not limited to Europe’s borders but extends into communities – and, with technology, beyond even physical spaces into people’s devices and deep into their personal and biometric data. The extension of this web of surveillance increases the power of authorities to monitor – and to act – clandestinely to screen for, identify and deter or deport foreigners who meet certain “risk” profiles.
Meanwhile, civil society is shut out from many of these spaces, and criminalised for using technology to assist those whose lives are put at risk by these policies.
Digital policing at borders
In October 2021, Poland’s parliament approved EUR 350 million for the construction of a 5.5 metre wall along its border with Belarus, equipped with motion detectors and thermal cameras, responding to Belarusian authorities’ strategy of encouraging migrants to cross the border after the EU imposed sanctions against the country for human rights violations.
This type of surveillance apparatus is not new. Since 2013, Frontex, the EU’s border and coast guard agency, has run Eurosur, a framework for information exchange and cooperation between member states and Frontex that aims to prevent irregular migration and cross-border crime through the use of drones, vessels, manned and unmanned aircraft, helicopters and satellites equipped with radar systems, thermal cameras and high-tech sensors.
Eurosur was itself inspired by Spain’s surveillance system Sistema Integrado de Vigilancia Exterior (SIVE), which, since 2000, has been used to monitor the Spain-Morocco border and Spanish territorial waters using radar technology, high-tech cameras, vessel automatic identification systems, and border guards. SIVE was later expanded through partnerships with Senegal, Mali, Ghana, Ivory Coast, Cape Verde, Guinea-Conakry, Gambia, Nigeria, Guinea-Bissau, Mauritania and Morocco.
The use of surveillance to monitor human mobility is not limited to the sea or to land borders. In Slovenia, the police systematically gather passenger data (Passenger Name Records) for all flights arriving into Slovenia from third countries and EU member states, which is matched against “other police data” like criminal files. The police reportedly acquired information on nearly 800,000 airline passengers between October 2017 and November 2018, prompting the Slovenian Human Rights Ombudsman and Information Commissioner to file a complaint with the constitutional court challenging the practice.
In December 2021, following a challenge by human rights advocates to the mass surveillance measures carried out under the EU’s Passenger Name Record (PNR) Directive (2016/681), Belgium’s constitutional court asked the EU Court of Justice whether the Directive is compatible with the Charter of Fundamental Rights.
An EU-funded pilot project called iBorderCtrl introduced artificial intelligence (AI)-powered lie detectors at border checkpoints in airports in several member states, intended to monitor people’s faces for signs of lying and to flag individuals for further screening by a human officer. This pilot project has been challenged in the EU’s Court of Justice by Member of the European Parliament Patrick Breyer over the secrecy of its documents, which are believed to contain information on the algorithms underlying the technology.
In September 2020, the European Commission published its Pact on Migration and Asylum, along with a host of legislative proposals. Among other things, the Pact proposes a mandatory “pre-entry” screening at the external border for anyone who enters the EU irregularly, covering security, health, and vulnerability checks. Significantly, this screening would also apply to people already present in the EU, if they entered irregularly.
Digital policing in migration procedures
In her 2020 report on racial discrimination and the use of technology in the context of borders, the United Nations Special Rapporteur on contemporary forms of racism E. Tendayi Achiume underscores “how digital technologies are being deployed to advance the xenophobic and racially discriminatory ideologies that have become so prevalent, in part due to widespread perceptions of refugees and migrants as per se threats to national security.” She adds:
“In other cases, discrimination and exclusion occur in the absence of explicit animus, but as a result of the pursuit of bureaucratic and humanitarian efficiency without the necessary human rights safeguards.”
Technology, then, is also becoming a feature of immigration procedures, justified by arguments of efficiency and increased objectivity, while implicitly reinforcing the image of certain types of migrants as inherently untrustworthy or high risk.
In Germany, AlgorithmWatch reports that the Federal Office for Migration and Refugees (BAMF) employs automated text and speech recognition systems in asylum proceedings: “Agency employees can ask asylum seekers to give them access to their cell phone, tablet or laptop to verify they are telling the truth about where they come from”, running software to extract and analyse data from the devices. When an asylum seeker does not have a valid ID, a voice recording of the person describing a picture in their mother tongue is analysed by software to evaluate their dialect. In June 2021, a regional court ruled that the searching of asylum seekers’ phones was unlawful.
In the United Kingdom, in 2019 the Joint Council for the Welfare of Immigrants (JCWI) and Foxglove launched a legal case challenging the discriminatory nature of the secretive visa algorithms used by the UK Home Office, arguing they created three separate streams or channels for applicants, whereby applications from people of certain nationalities received a higher risk rating and were much more likely to be refused. They alleged that this type of risk streaming resulted in racial discrimination and violated the 2010 Equality Act. In August 2020, the Home Secretary announced plans to end the use of the streaming algorithm, and to do a full review of the system.
In 2022, ETIAS, the EU’s new travel authorisation system for people of nationalities currently not requiring a visa to enter the Schengen area, will come into force. It will be the newest addition to a motley family of EU information systems that the EU plans to make more “interoperable” under 2019 regulations intended to improve the EU’s ability to tackle irregular migration and crime. This entire framework, whose centralised database will have the capacity to house 300 million records, is built on the foundational assumption that non-Europeans must be systematically screened as threats to security. A 2021 Frontex report makes clear that ETIAS – a database of the personal information of foreigners coming to Europe for things like weddings, business meetings, and vacation – is effectively part of the EU’s security apparatus:
“ETIAS enables the collection of information on people travelling visa-free to the EU in order to deny individuals travelling within the Schengen area who pose a security risk. This is a centralised EU system to issue travel authorisations that enhances external and internal security of the EU.”
The challenges – and opportunities – ahead
On 21 April 2021, the European Commission announced its proposal for a new regulation on artificial intelligence (the AI Act), billing it as a set of rules to promote “excellence and trust.” While the proposed AI Act reflects the hard work of civil society to ban some unacceptable uses of AI, and to recognise certain uses of AI – including several in the context of migration and asylum – as presenting a “high risk” to fundamental rights and safety, it focuses mostly on technical fixes to those uses, not on identifying or mitigating their harmful impact on certain groups and communities. Strikingly, the regulation explicitly exempts the EU’s migration databases from its application.
The AI Act is yet another example of how the EU’s aspirations for leadership in tech and privacy governance are marred by its expenditure of a stunning level of resources and political attention to target non-EU citizens for ever-more invasive and harmful forms of surveillance, monitoring and policing.
The AI Act does have a good news story, however. A large coalition of diverse organisations – variously specialised in migrants’ rights, patients’ rights, disability rights, digital rights, among others – has emerged to advocate around it, and to press collectively – as well as individually – for significant revisions to the proposal. In November 2021, 120 organisations joined a call for changes that would bring it into line with fundamental rights.
In Germany, a coalition including partners as diverse as Doctors of the World and the Society for Civil Rights (Gesellschaft für Freiheitsrechte), a legal advocacy organisation specialised in privacy rights, joined forces to advocate against the policing of migrants through channels that reach well beyond migration policy. Their campaign, called Gleich Behandeln (“Treat Equally”), targets section 87 of Germany’s Residence Act on the “transfer of data and information to foreigners’ authorities”, which obliges public authorities to report undocumented people they come in contact with to the immigration authorities. The result is that people with irregular migration status in Germany face immigration control consequences if they try to obtain health care coverage to which they are entitled. In 2021, following this campaign, the new German coalition government adopted a coalition agreement that pledges to lift these reporting obligations.
Such large and diverse coalitions are critical to challenging uses of technology that are discriminatory and harmful. Building them requires advocates to step out of their comfort zones to develop the knowledge, relationships and trust needed to leverage the efforts and insights of advocates working in the areas of migrants’ rights, broader human rights, digital rights, anti-racism, and prison abolition, among others. Because while it’s important to mitigate the damaging uses of technologies, we must keep sight of the fact that the fundamental problem is the deployment of technology to advance an already problematic and harmful agenda.
Alyna Smith was one of the speakers at our second Digital Rights for All workshop, “Fighting back against the digital policing of migration”. She discussed the digital criminalisation of migration and the threats it poses to human rights. This blog post is based on and expands upon her presentation during the workshop.