Tackling the Impact Measurement Challenge

By Patrick Regan, 22nd July 2020

A blurry sweep of blue and white lines.

In early June 2020, a number of DFF grantees came together virtually for DFF’s first ever “outcomes harvesting” workshop.

This marked the next step in DFF’s plans to pilot a new framework for monitoring the impact of strategic litigation to advance digital rights. But what is “outcomes harvesting”? What is this new framework? Why is it important?

Measuring the impact of strategic litigation is no easy feat.

Litigation can take years, the context is often complex, and there are many factors outside our control which could influence the effectiveness of the litigation. Often, we have only limited (if any) access to key decision makers involved to gain insight into how law, policy or practice has actually changed.

This makes it challenging not only to identify impact, but also to assess your own contribution when impact does occur. The challenge is compounded by the lack of readily available evaluation tools and methodologies tailored to strategic litigation.

Many NGOs and litigators do not have the time or resources to reflect, evaluate and collect robust evidence to learn from their advocacy activities. However, measuring and demonstrating the impact of litigation is imperative for the digital rights field to work more efficiently and effectively, and attract support for this type of work. So how can we overcome these challenges?

I would be embarrassed to admit the number of times I lay awake pondering different ways to solve this problem. Thankfully, my pondering was not in vain. In early 2019, I found out that DFF had noticed the same issues around measuring the impact of strategic litigation, and had decided to try and do something about it. This served as a jumping-off point to develop an exciting (if you are an evaluation nerd like me) new framework with DFF to help measure the impact of strategic litigation in digital rights.

Developing a Pilot Framework

DFF wanted to develop something that was specific to strategic litigation in the digital rights field, but which could be easily implemented and adapted for different organisations. It was imperative to strike the right balance between something that had enough rigour to be reliable, but also light enough that it would not demand too many (already scarce) resources to use.

Building on my own previous experiences of adapting tools for strategic litigation impact assessment, reviewing available literature and case studies, and talking to other organisations in the field to check relevance and draw inspiration, we developed a framework based on three key components:

  • A thematic framework of outcome themes and impact types to help guide the process of monitoring, identifying and analysing outcomes.
  • A methodology for capturing “outcome statements” (which takes inspiration from a methodology called “outcomes harvesting”).
  • A set of evidence principles to add a layer of rigour and quality control.

The framework was published in December 2019 under a Creative Commons licence so it can be easily shared and adapted.

Putting the Framework into Action

On 10 and 15 June 2020, eight DFF grantees came together for a workshop to learn about the framework, marking the next step in DFF’s plans to pilot the framework and put it into practice.

During the first part of the workshop, we discussed some common challenges in measuring impact in strategic litigation and digital rights. This was followed by a short introductory training on the outcomes harvesting methodology.

The group raised some important challenges and questions. How can you measure and understand how different groups and communities are impacted by the litigation? What impact does the litigation have when the state in question does not implement a judgment? How can you attribute higher-level impacts to your work when so many others are involved? How can we communicate the impact of strategic litigation to a wider audience when both the process and the issues litigated are complex and technical?

The framework seeks to respond to some of these challenges. It provides a tool to help document and identify key outcomes and changes, and helps to identify your contribution to these changes. It also gives a structured way to capture learnings and unexpected outcomes, and to seek out negative outcomes so we can better mitigate them in the future.

It’s Not All About the Judgment

The direct legal outcomes of strategic litigation are often significant and important: for example, obtaining a positive judgment that establishes precedent and new case law, or securing redress for the claimants.

However, the framework we developed also encourages you to think about the other ways in which litigation may be generating positive results and changes, at all stages of the litigation process. This could be by increasing public awareness or changing public perception on an issue; changing the way a certain topic is presented in the media; or even the impact that the act of taking the case might have on your own organisation, network or the wider field.

“Sometimes we can focus too much on the bigger, longer-term aims,” commented one participant. This was an important point of discussion during the workshop. Another workshop participant said of the approach that “it’s a good way to start thinking outside of the box in terms of impacts, to think about all of the things and areas your litigation might affect”.

The framework encourages anyone using it to consider these different layers of outcomes and to appreciate their value and contribution to other, broader outcomes and impacts.

Evidence Standards

During the workshop we also discussed data and evidence quality, considering what was both realistic to collect and credible enough to be used in measuring outcomes. The group discussed some key questions to help think through evidence quality, such as:

  • Is the outcome you identified, and your contribution towards it, proportional to your evidence? (i.e. is your contribution claim realistic based on the evidence you have?)
  • Is the contribution you are claiming proportional to the size, scale and timing of your litigation/advocacy?
  • Has the evidence been peer reviewed or independently verified by anyone else?
  • Are the voices of those you claim to have had an impact on represented in the evidence?

The pilot framework has been designed so that organisations can use the data they might already capture, their existing monitoring or evaluation systems, and also anecdotal information that they have access to. It proposes a set of evidence principles to help assess the reliability of the evidence and outcomes you have identified.

Putting the Framework to the Test

After the first workshop, the group put the methodology into practice by identifying and preparing a number of outcomes concerning their litigation projects. During the second part of the workshop, the group shared some of these outcomes as well as their reflections on using the methodology. We also shared and reflected on some of the challenges, successes and learnings that come with engaging in digital rights litigation.

The group “harvested” over 30 different outcomes which ranged from establishing important and novel legal precedents and influencing the way in which certain topics are portrayed in the media, to more organisational and field-related outcomes, such as prompting others to engage in legal action or developing and applying knowledge gained through their pre-litigation research to improve the quality and reach of other advocacy campaigns. 

Some organisations that were at an earlier stage of their litigation used the method to help identify and articulate their desired outcomes. They thought about what “impact” would actually look like in reality and what changes they might be able to observe. This helped them to plan what kind of evidence or data they would need to collect to know whether this change has happened.

The framework and methodology are, of course, only one potential solution. They draw on a number of different research and evaluation principles that can be adapted flexibly to a variety of situations and contexts. As one participant commented, “it seems like a useful way to understand and develop an idea of what your role is in a bigger change, and what part the litigation played”.

DFF will continue to pilot the framework through its grantmaking and by holding more outcomes workshops. If you would like to know more about the framework, or attend a future DFF outcomes workshop, please email DFF’s Programme Officer.

Patrick Regan is an independent evaluation consultant specialising in the evaluation of human rights projects that use litigation and the law to drive social change. His consultancy, the Rights Evaluation Studio, provides a range of evaluation and project design services.

Taking Police Tech to Court

By Jonathan McCully, 22nd July 2020

Police officers in riot gear look out across an intersection.

Last month saw some significant milestones in the fight against digital and data-driven tools used in policing.

At the end of June, a bill was introduced in the US Congress calling for a moratorium on facial recognition and biometric technology, while Santa Cruz and Boston joined the growing list of US cities to ban police and public authority use of facial recognition. Santa Cruz went even further, becoming the first US city to ban the use of predictive policing technology outright.

Last month, the Association for Computing Machinery (ACM) also added their voice, calling on lawmakers to suspend the use of facial recognition that can be prejudicial to “human and legal rights.”

In Europe, the European Data Protection Supervisor called for a suspension of automated recognition technologies, including those that measure “gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals.”

In the UK, hearings commenced in Liberty’s appeal in their legal challenge to the use of facial recognition technology by South Wales Police. 

These actions follow the move by IBM, Microsoft, and Amazon to restrict, to varying degrees, their facial recognition businesses. They also come off the back of years of tireless campaigning and research on facial recognition by activists and human rights organisations, including the groundbreaking work of the Algorithmic Justice League’s Joy Buolamwini, Timnit Gebru, and Inioluwa Deborah Raji.

These milestones have come at an incredibly significant time, with police brutality, abuse, and systemic racism dominating public discussion and protests across the globe.

The Black Lives Matter message has been resolute and urgent: all societies need to consider reforming, defunding, dismantling or abolishing their law enforcement institutions in their current form to achieve justice and foster healing. This includes ending the use of technologies that embed, entrench and exacerbate police discrimination and violate the rights of individuals against whom these technologies are targeted and used.

Taking Police Tech to Court

Strategic litigation can be an empowering tool to challenge police measures, policy and law, and it is a crucial mechanism by which law enforcement can be held to account for their actions.

The European Court of Human Rights (ECtHR), for example, has a substantial body of case law dealing with issues such as the failure to investigate racist or discriminatory police conduct, excessive use of force by police, interrogation techniques, and the policing of public assemblies.

More cases are also coming before the courts and regulatory bodies challenging the rise of Big Data policing. Even over the last year and a half, a number of significant decisions have been handed down in Europe that set important precedent around police use of new technologies and data-driven techniques.

There is much work still to be done, particularly in pushing for decisions that take a more intersectional approach to assessing these measures, but these cases offer some hope around the potential for strategic litigation to set necessary limits on police tech.

A group of police officers in riot gear make their way across a street.

Law Enforcement: Not Above the Law

Courts and independent oversight bodies perform an essential role in enforcing existing laws against the police. In some areas, strong regulations that restrict police use of data-driven techniques already exist.

One prominent example is the EU Police Directive (Directive (EU) 2016/680), which establishes data protection principles and standards for authorities responsible for preventing, investigating, detecting and prosecuting crimes. It has already been applied by courts to find certain police data sharing practices unlawful.

Earlier this year, the UK Supreme Court delivered its first judgment applying the national law implementing the Police Directive in a case concerning international law enforcement information sharing.

The case, Elgizouli v. Secretary of State for the Home Department, concerned an individual who had been accused of being a member of a terrorist group in Syria. This group had allegedly been involved in the murder of US and British citizens.

While investigating this group, the US had sought, through a “mutual legal assistance” request, evidence related to the individual that had been previously obtained by British authorities.

Without obtaining assurances from the US that this evidence would not be used in a prosecution that could lead to the imposition of the death penalty, the Home Secretary agreed to provide the information. One of the questions raised in the case was whether this international transfer of data violated the Data Protection Act as it applies to law enforcement authorities. The UK Supreme Court found that it did, reasoning that the Home Secretary had failed to address the specific requirements of the Data Protection Act. Instead, the information was transferred on the basis of “political expediency” rather than consideration of what the legislation permitted or required of law enforcement.

Police services have also been held to account by Data Protection Authorities for their failure to comply with data protection law.

Last year, the UK’s Information Commissioner’s Office (ICO) issued an enforcement notice against the Metropolitan Police Service for its failure to respect the right of individuals to access information held about them by the police force. More specifically, the Metropolitan Police Service had been asked to address its significant backlog of subject access requests. In Germany, meanwhile, fines have been imposed on individual police officers for abusing their position by using data from their work to make private contact with individuals.

Procedural Safeguards

Technological developments have diversified the intrusive means by which the police can observe, surveil, and search.

In the US, groundbreaking cases have been brought before the Supreme Court clarifying that the definition of “search” under the two-century-old Fourth Amendment extends to thermal scanning, the tracking of an electronic beeper, and accessing historical cellphone location records.

In doing so, these cases have affirmed that a search warrant is required before these measures can be adopted – a procedural safeguard that can help protect individuals against abusive or arbitrary uses of these technologies.

Similar cases can be seen in European courts. One recent case was brought only a few months ago by La Quadrature du Net and the Human Rights League in France. It concerned the use of drones by the Parisian police to enforce the country’s COVID-19 lockdown measures.

These drones were fitted with zoom-enabled cameras and loudspeakers. According to the police, however, they did not intend to use the zoom function, nor would they use the drones to identify specific individuals. Instead, the drones would be flown at an altitude of 80 to 100 metres, use wide-angle lenses, and carry no memory cards. Their purpose was to identify “public gatherings” by transmitting images in real time to a command centre, from which police officers could be deployed to disperse the gatherings.

The Conseil d’État found that the use of these drones engaged data protection law. The drone operators had the ability to use the optical zoom and could fly the drone lower than the designated height, meaning that the drones were likely to be used to process identifying data of individuals.

The court agreed that the purpose behind the use of the drones, namely the protection of the public, was legitimate. However, under French law, the use of these drones required prior ministerial authorisation, issued following a “reasoned and published opinion” of the National Commission for Data Protection (CNIL).

Alternatively, in the absence of such authorisation, the drones would need to be fitted with technical measures that would make it impossible for police to identify individuals when using them. In short, the police could no longer continue to use the drones without first obtaining official authorisation.

A group of police officers in riot gear standing on a street

Databases and Watchlists

Police collection and retention of personal data, including in databases and watchlists, continues to be an issue that is frequently litigated before European courts.

In the last eighteen months, the ECtHR has handed down a number of important judgments that found violations of the right to privacy due to police data retention practices.

In Catt v. the United Kingdom, the ECtHR criticised the UK police’s retention of a peace campaigner’s data in an “extremism database.”

The data included information such as his name, address, date of birth and presence at demonstrations, including political and trade union events that revealed his political opinions.

The ECtHR found the retention unnecessary. In particular, it found fault with the fact that the data could be retained indefinitely and that the retention lacked adequate safeguards.

Furthermore, the authorities failed to comply with their own definition of “extremism” by retaining the campaigner’s data in the database, and they failed to take into account the heightened protection that should be given to political opinions.

Earlier this year, in Gaughran v. the United Kingdom, the ECtHR found a violation of the right to privacy where, following a conviction for a minor offence, an individual had his DNA profile, fingerprints and custody photograph retained indefinitely in a local database to be used by police. The ECtHR found this data retention regime to be disproportionate in light of, among other things, its indiscriminate nature and the lack of any real possibility for the data retention to be reviewed.

This is a significant judgment from the ECtHR, as it appeared to be the first time the Court had referred to facial recognition technology in its substantive assessment of a case.

Despite the fact that such technology was not used in relation to the applicant’s photograph, the ECtHR took into account the fact that the “police may also apply facial recognition and facial mapping techniques to the photograph” in finding an interference. This was a notable change of approach compared to decisions handed down by the European Commission of Human Rights in the 1970s, 1980s and 1990s, in which the Commission reasoned that the retention of custody photographs would not amount to an interference with the right to privacy.

Also this year, in Trajkovski and Chipovski v. North Macedonia, the ECtHR condemned police retention of DNA profiles of convicted individuals. Applying its previous case law on police retention of DNA information, the ECtHR found the blanket and indiscriminate nature of the police powers to retain convicted individuals’ DNA profiles, coupled with the lack of sufficient safeguards, amounted to a violation of the right to privacy.

In another case from this year, the ECtHR recognised that police retention of palm prints draws the same human rights considerations as the police retention of fingerprints. The retention of data in police databases is an area of ECtHR jurisprudence that is likely to grow over the coming years, with 11 pending cases before the court dealing with the retention of criminal records.

Room for Improvement

These cases are promising indications of the role courts can play in reining in Big Data policing, particularly where it concerns disproportionate intrusions upon the private lives of individuals.

Nonetheless, there have been relatively few cases in Europe that take a truly intersectional approach to challenging police tech.

For instance, the vast majority of court decisions concerning police measures that surveil and intrude upon individual privacy do not properly consider how those measures can often be targeted at, have a disproportionate impact on, or profile certain individuals based on their ethnicity, nationality, religion, language, race or socio-economic status. In 2019, for example, the High Court of England and Wales did not appear to fully and meaningfully engage with the discrimination arguments raised with regard to the South Wales Police’s deployment of facial recognition technology.

There is much more left to be said by the courts in relation to police tech. Now is the time for litigation to be taken and for the courts to properly take account of the entire spectrum of human rights concerns raised by Big Data policing, including those related to the discriminatory impact of new technologies.

Photos by ev on Unsplash

UN Special Rapporteur Warns of Racial Discrimination Exacerbated by Technology

By Nani Jansen Reventlow, 15th July 2020

The Digital Freedom Fund welcomes the publication of the report “Racial discrimination and emerging digital technologies: a human rights analysis” by the UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume.  

The report goes beyond analysing specific instances of racial discrimination on digital platforms to assess how emerging digital technologies perpetuate racial discrimination on a structural level. Examining the racially discriminatory consequences of algorithmic decision-making throughout public and private life, the report sheds light on how existing inequalities, biases, and assumptions in the design and use of digital technologies threaten the human rights of marginalised and racialised groups. The report also outlines a structural and intersectional human rights law approach to regulating digital technologies, emphasising that an effective response to racial discrimination must include efforts to break down power structures within the sectors that make decisions about the design and use of these technologies, including private technology companies, research institutions, and public regulatory bodies. 

A cohort of digital rights NGOs, including DFF, have released a statement highlighting a number of salient points in the Special Rapporteur’s report. The statement – which is open to additional signatories – expands on the human rights framework set out in the report, drawing attention to several specific commitments which state and corporate actors must uphold to prevent and remedy the discriminatory impacts of emerging digital technologies. 

The NGOs assert that certain technologies should be banned outright. Incremental regulation, they argue, is not appropriate for technologies such as facial recognition and predictive analytics that are demonstrably likely to cause racially discriminatory harm. The statement also emphasises the importance of keeping access to technology at the forefront of dialogues about racial discrimination in the design of digital technologies. The digital divide between the global South and the global North will perpetuate and deepen existing global inequalities as societies increasingly rely on digital platforms to distribute public goods, medical care, and education. The digital divide is not limited to less-resourced countries, either. In the US, for example, lack of basic internet access falls disproportionately on Black, Latino, and American Indian communities. 

The NGOs also elaborate on the Special Rapporteur’s account of how the values and practices of the technology field must change in order to ensure that digital tools do not replicate racially discriminatory structures. The statement echoes the report’s challenge against Silicon Valley “technochauvinism,” or the idea that technology is the best solution to social problems, and expands on the importance of including people who have experienced the impact of racially discriminatory technology in the design process and compensating them for their contributions. In discussing racial equality data, the NGOs go a step beyond exploring ways to dismantle structural racism in the technology industry, examining how even efforts to combat digital discrimination have the potential to perpetuate racial hierarchies.

While the Special Rapporteur advocates for data collection to help address racial inequities, the NGOs highlight the risk this could pose for already marginalised populations. The NGOs would nonetheless welcome, and gladly participate in, an effort to develop standards for non-extractive data collection and governance, including measures to ensure that data collection, analysis, interpretation, and dissemination do not reinforce existing racial and other hierarchies, as well as measures to address the power dynamics between data collectors and those whose data is collected.

The UN Special Rapporteur’s report outlines critical warnings about the racially discriminatory potential of technology, which is particularly urgent in the wake of the coronavirus as governments around the world launch digital interventions in the name of the public good. DFF hopes that the report will receive broad support from civil society and that it will prompt much-needed further conversations on the issues addressed.

These conversations should include reflections on the makeup of the digital rights field itself, which, in DFF’s view, is currently too embedded in the power structures that enable the practices addressed by the Special Rapporteur to adequately fight them. We at DFF, together with EDRi, have initiated a decolonising process which aims to examine and tackle this issue. To learn more and find out how to get involved, visit the DFF website.

Update (September 2020): 80 NGOs and 55 individuals have joined the Digital Freedom Fund, Access Now, AI Now Institute, Amnesty International, Association for Progressive Communications and Internet Sans Frontières in signing on to the statement in support of the UN Special Rapporteur’s report. For the full statement and list of signatories, see here. DFF is grateful to Jessica Fjeld and Vivek Krishnamurthy for their leadership in drafting the statement.

Photo by Bryan Colosky on Unsplash