This year, World Refugee Day comes during a devastating global pandemic that has closed borders around the world.
The current reality makes it even more difficult for refugees to flee to safety and seek asylum. We are also witnessing protests and community action against structural racial violence worldwide, which have brought to the surface a number of disturbing trends in border surveillance technology and policy.
One widely reported success in this regard, due in no small part to years of civil society campaigning, is a moratorium on the sale of facial recognition technology to law enforcement in the United States. Research has shown widespread racial and gender bias in these intrusive technologies, and the moratorium is a welcome move towards stopping discriminatory practices and the abuse of data by law enforcement.
However, these technologies are still being used to monitor the movements of refugees and migrants, who do not have the same protections and access to legal mechanisms as citizens.
Almost immediately, the COVID-19 pandemic became a justification for governments to use more intrusive surveillance technologies on refugees and migrants, in some cases forcing them to wear wristbands that track location and movement. Biometrics have become commonly used in the aid sector, by UN agencies, NGOs, border control, and private sector contractors, both for registering refugees and for controlling their access to aid and services.
UN agencies like the World Food Program (WFP) justify the widespread application of biometrics as a solution to identity theft among aid recipients, a problem that is largely non-existent, and to the embezzlement of aid, which is overwhelmingly committed by aid providers, not recipients.
Such large-scale reliance on these technologies has reduced human bodies to evidence. As a result, and in an effort to regain some control over their bodies and their data, there have been a number of disturbing reports of refugees burning off their own fingerprints out of fear of being tracked and returned to their countries of origin, or to entry-point countries in the EU under the Dublin Regulation.
Like facial recognition technology, fingerprint readers were built on racialised assumptions about their users, and achieve high accuracy only for middle-aged men with lighter skin tones. There has been a pattern of centralised collection and storage of this biometric and personal data across sectors and agencies, seen in public-private partnerships like Palantir-WFP, Microsoft-International Committee of the Red Cross, and Facebook Libra-Mercy Corps.
Even though these organisations claim that data-sharing partnerships make their operations easier, this comes at the cost of losing the trust and confidence of refugee communities.
Lastly, the target of migration surveillance has shifted to the people crossing into bordering countries, and to monitoring the human rights organisations that work with refugees. As facial recognition technology and other biometric tracking have become normalised through public-private partnerships, we are seeing a growing trend of cross-border surveillance of migrants and refugees.
Outsourcing the surveillance of migrants has become the norm in Australia, the United States, and the EU, carried out through data extraction companies like Cellebrite, whose tools bypass device passwords to track location and movement through digital devices. Furthermore, border-monitoring systems like EUROSUR have shifted their focus to surveilling civil society organisations that conduct search and rescue at sea, as well as journalists and human rights researchers who document the abuse and harassment of asylum seekers by border control agents in the US.
These trends in humanitarian “technosolutionism”, cross-border surveillance of migrants and refugees, and use of their data without consent are deeply troubling, and evident across sectors: at UN agencies, border control agencies, and non-government organisations.
With no one to turn to for protection and no way to access their own data, refugees are in effect being told by these stakeholders that they are too untrustworthy to be asked for information directly and with informed consent.
It is therefore critical that, in this political moment of reimagining and restructuring our systems, we include the needs and rights of refugees, because the overreach of intrusive technologies always starts with the most marginalised and invisible communities before it is normalised and eventually used widely on others.
Dragana Kaurin is a human rights researcher and Executive Director of Localization Lab.