An Important New Node in the Digital Rights Network

By Wouter de Iongh, 7th September 2020

The external evaluation of DFF’s 2018-2020 pilot phase confirms its relevance and effectiveness, while identifying ways to further enhance its value to the digital rights community.

The Digital Freedom Fund (DFF) has existed since 2017, with its first 3-year strategy going into effect in 2018. During this period, DFF has attracted funding, built a team and organisation, put in place systems to process grant requests – some of which have already led to high-profile rulings – and organised a variety of meetings with stakeholders in the field of digital human rights. As its 2018-2020 strategy approaches its end, DFF – supported by its seed funders – commissioned an independent external evaluation of what has been referred to as DFF’s “pilot phase”.

The evaluation was conducted by a team of senior consultants from the non-profit consultancy ODS between February and June 2020, through a combination of desk research; meetings with over 50 stakeholders, including DFF’s leadership, grantees, meeting participants and external experts; and round tables with the DFF team and its donors.

The evaluation focused on whether DFF’s Theory of Change (ToC) was (still) valid and relevant to the digital rights community. To assess this, the evaluators tested the theory, reviewed how far DFF has progressed towards the impact it aims to achieve, and tracked how effectively and efficiently DFF has put the theory into practice. This resulted in an analysis of the current situation as well as a set of strategic and operational recommendations for DFF’s next phase and strategic period.

In its ToC, DFF’s overarching objective is to “further human rights in digital and networked spaces by increasing the number of successful strategic litigation cases and by supporting the contribution of such cases to wider public debates, policies and practices”. In addition, the ToC holds that a condition for success in achieving that goal is that the actors involved in strategic litigation become stronger and more sustainable.

This leads DFF to work towards these goals through two broad workstreams: litigation support, consisting of providing grants to strategic litigation projects and the promotion of pro-bono support, and field building, which involves capacity building, networking, knowledge sharing and strategy development.

The evaluation looked at DFF’s positioning in the field of digital rights and at the consistency, balance and strength of its ToC. The evaluators’ assessment is that DFF is very strong in all these areas.

First, the combination of grantmaking and field building with strategic litigation is unique in the field of digital rights, and DFF is seen as adding important value to the field. This can be attributed to the strength of the ToC, but also to DFF’s practice of continuous horizon scanning and consultation with a diverse group of actors, which allows it to adapt to changing circumstances.

Second, the connection between DFF’s interventions and goals is convincing and strong. Strategic litigation is important for furthering digital rights, yet it has been underutilised in the digital rights field, partly due to a lack of capacity and partly due to a lack of resources. DFF has worked to remedy these limitations by offering actors in the field who are interested in or embarking on strategic litigation access to funds, advice and support in the course of their grant applications, as well as opportunities to learn and network at strategic, thematic and capacity building meetings. Finally, the activities and outputs in the ToC are credible and sufficient to achieve the outcomes DFF works towards.

DFF has also been highly successful when measured against its own quantitative goals: providing at least 20 grants, promoting pro bono support to at least 75% of applicants, and achieving satisfaction rates of at least 75% among participants in field building activities. While more difficult to measure, the evaluation found that the more qualitative targets – improvements in the quality of applications, more collaboration on these applications, and greater strategic alignment of actors in the digital rights field – have been met as well.

These metrics by themselves, however, are not sufficient to show how far DFF has progressed towards the outcomes in its ToC or towards actual societal impact. This is not a problem for now: two to three years is too short a period to measure impact in most cases, and this evaluation focused on the conditions for success and on progress towards impact. A next strategy, however, would need to bring the metrics in line with the ToC, something DFF is in fact already working on together with a Monitoring & Evaluation expert.

In providing strategic litigation support, and in the grant application process in particular, DFF has encountered some pushback from the field, which was confirmed during the interviews. While all interviewees appreciated the professionalism of and support from DFF’s team and the clarity of its communications, they also indicated that the application process was too intensive and came at too high a cost relative to the size of the eventual grant. To understand this view, the evaluators traced DFF’s grantmaking process through four stages:

  • communication of the aims and conditions of DFF’s support;
  • the application requirements;
  • the assessment of applications;
  • the conditions of the grant itself.

This exercise made clear that while minor improvements could be made to the way DFF communicates and to the application requirements and forms, these were in fact already quite streamlined, clear and, in the case of the application, reasonable in length and detail.

The way DFF assessed applications, and the limited flexibility in the size and conditions of the eventual grant, led some applicants to question whether they would apply for a grant again in the future. The application of DFF’s own grantmaking guidelines was reviewed at two levels.

One area for improvement would be to reduce the many iterations through which DFF assesses applications, with the team, a Panel of Experts, and the Board all offering their views, sometimes in multiple rounds. The other is more substantive: the existing rules are applied quite strictly and in the same way for all applicants, regardless of the type of organisation or case. This is understandable in a pilot phase, as DFF needed to put in place mechanisms to ensure accountability towards its donors, and it thus does not lead to a negative assessment of the pilot phase. The evaluators do think, however, that going forward a more tailored approach would be beneficial.

On the conditions of the grant itself, the challenge has been that during the pilot phase, DFF was unable to commit to the type of longer-term grant that a case spanning multiple instances would require. In addition, a number of costs were excluded (adverse costs in particular) or limited (advocacy costs, legal fees and operational costs).

In addition, the possibility to combine more than one type of grant – single instance, research, emergency – was not widely known among grantees. Increasing the length and scope of grants would change the cost-benefit analysis of applying, and DFF is already developing procedures to do this: it plans to allow adverse costs to be included under certain conditions, and will start offering grants spanning multiple instances (known as ‘track support’). More flexibility in the costs covered and in the application process would, in the view of the evaluators, also give DFF and applicants additional leeway in making strategic decisions on the timing, collaboration and jurisdiction of the cases they are considering.

In its coordination and field building work, DFF has exceeded the outcomes that could reasonably have been expected, especially through the meetings it has organised. DFF’s practice of consulting on its strategies and on the agenda of every meeting, its active push to broaden and deepen the group of participants, the open way in which the meetings are organised and facilitated, and the issues that are being discussed all contributed to the evaluators’ assessment that DFF has had a positive impact on the field. This stems from an overall active strategy to broaden the field geographically, thematically and in terms of the people and groups who are given a voice in the field of digital human rights. The inclusive principles behind this approach are now being further developed through a decolonising strategy, which has the potential to solidify and expand DFF’s impact in this regard.

Some elements could be further strengthened. As the demands on the field increase and DFF’s network grows, strategic meetings may need to be broken up by theme, and the broad annual strategic meeting may need to become biannual, something DFF is already considering. The meeting format and facilitation, while generally appreciated, could benefit from more diversification to better fit all participants, the specific themes discussed, or the purpose of a given meeting.

Pro bono support, by contrast, has not been taken up by the field, even though DFF has promoted it consistently. DFF could therefore reconsider what it offers to facilitate pro bono or low bono support, as well as how it communicates about it to the grantees who might benefit from it.

Finally, DFF could further leverage its knowledge of the field and its issues by proactively bringing different actors together to support each other on issues outside of DFF’s mission, to work on themes or applications, to represent the field in certain fora, or to find additional resources. DFF’s position between donors, the digital rights community, legal experts and policymakers offers it a unique opportunity to add value by being more intentional in offering insights or suggestions to the field. While DFF should remain at some distance so as not to take up the space of its grantees or partners, its role as facilitator could also be a way to place partners more front and centre, by asking them to develop themes or represent the field in certain fora.

Overall, the conclusion of this evaluation is unequivocal: DFF’s ToC is relevant, the goals set at the beginning of the pilot phase have been met, and they have been met efficiently.

With that, progress towards impact is impressive and impact is likely in the near future, while DFF’s effect on the field through its approach to field building is already noticeable. The impact of grants on specific cases is clear, but the longer-term impact of these cases on policies, attitudes and case law is not yet known.

Going forward, DFF does need to evolve to remain as relevant as it is and to ensure that the field continues to be interested in applying for grants. This evolution should be geared towards remaining attractive as a grantmaker and becoming more intentional and dynamic by increasing the options it has in its grantmaking work (both in the application process and the grant itself), diversifying its meetings, and facilitating connections between actors more proactively.

As DFF is already moving ahead in all these areas, and has the appropriate systems and organisation in place, the evaluators are confident that this evolution is possible and likely. The evaluators’ strategic recommendations are targeted towards this as well:

  1. Broaden the focus on what is considered strategic in litigation to include organisational strength, timing and location, among other factors;
  2. Become more proactive in connecting digital rights actors, leveraging DFF’s unique position in the field;
  3. Roll out the decolonising approach and mainstream it in all aspects of the work and towards donors;
  4. Consider adapting the litigation support practice along three avenues:
    – Limit iterations during the application process
    – Increase the value of a grant for applicants through long-term grants covering multiple instances and funding adverse costs where necessary
    – Integrate fieldbuilding work and grantmaking to further support applicants;
  5. Experiment with additional formats, styles and subjects for DFF’s field building meetings.

The external evaluators are grateful to all who took the time to provide input, think with us and connect us to other sources, in the course of the evaluation process. We especially wish to thank the DFF team for supporting us in facilitating the data collection in the current challenging circumstances and for being open and transparent in sharing information about their work.

Wouter de Iongh is Partner at ODS, a cooperative consultancy working exclusively with non-profit causes and organisations on Organisational Development, Monitoring, Evaluation & Learning, Strategy Development & Advice and Action Research.

Why Privacy – Not Surveillance – is Security

By Maria Smith, 12th August 2020

As governments across the globe implement measures in response to coronavirus, many turn to technology.

Techno-solutionism is not new; its scale, however, is unprecedented. From wearable contact tracing devices in Singapore to mandatory selfies in Poland, surveillance-laden responses to the pandemic have ushered in the vast collection of personal information, including data on location, biometrics, credit card transactions, home addresses, and occupations.

The powerful rhetoric from governments and optimistic promises from companies assume—or want us to assume—that surveillance guarantees security. But, for many people, privacy means security.

Today we see that the same companies that have reportedly sold spyware used to target human rights activists are in the business of COVID-19 contact tracing, and even the most well-intentioned jurisdictions risk prioritising short-term power flexes over the long-term security of all their residents.

The reactionary turn to technology and big data—typically in tandem—during times of crisis rests on the premise that the more governments know about people, the better off the people are. But that raises the question: which people?

When surveillance is carried out on a mass scale, the use of the data collected often involves the profiling and othering of particular groups and communities. At the same time, targeted surveillance, such as that of welfare recipients or protesters, leads to the over-surveillance of those same communities. Surveillance is thus never about knowing everything about everybody, but about exerting control over certain groups more than over the general population; it often serves as a tool far beyond its alleged purpose.

Surveillance, at its core, is about power. The most low-tech surveillance practices, from map-making to fingerprinting, emerged from Europeans’ desire to control the populations they colonised. In India, for example, data on caste, religion, and age collected by the British were used to stoke religious tensions and solidify the caste system.

Germany’s census and citizen registration took on new missions when IBM Germany formed a technological alliance with Nazi Germany during WWII. When German occupation forces needed to deliver a quota of Dutch or Czech Jews to Nazi leaders, they located them through census data processed by an IBM machine specifically designed for that function.

The first national census in the United States, in 1790, inquired about the number of free white males, free white females, other free people, and slaves in a household. Early census enumeration, scholar Simone Browne explains, fixed individuals “within a certain time and a particular space, making the census a technology that renders a population legible in [racialising] as well as gendering ways.” Even something as seemingly innocuous as a population count can be used to legitimise discriminatory ends, such as de-humanising people of colour.

The Chinese government has made mass surveillance central to repression. Human Rights Watch reports that some 13 million Muslims live in a nightmarish reality in China’s Xinjiang region, home to a combination of in-person and online surveillance, video cameras with facial recognition technology, and electronic checkpoints. The Chinese government uses the information collected to determine who is detained for torturous “re-education” programmes.

Marginalised populations have historically been no strangers to the outsized dangers of targeted surveillance.

Abusive or extractive surveillance doesn’t necessarily begin as such, however, and the harms to particular communities are not always readily apparent.   

From forced “digital strip searches” exposing rape victims’ personal information in inappropriate ways, to “digital welfare dystopias” in which technologies are used to target, surveil, and punish the poor, modern surveillance infrastructures endanger the social, political, economic, and physical security of the least powerful communities.

At the Mexico-US border, for example, US Customs and Border Protection agents check entrants’ Facebook and Twitter profiles, “open[ing] the door to discrimination and abuse, and threaten[ing] freedom of speech and association.” The use of facial recognition technologies, a staple of punitive policing across the globe that has proven to be deeply biased, has recently faced backlash as a tool that harms racialised and marginalised groups.

In her book The Age of Surveillance Capitalism, Shoshana Zuboff charts the expansion of surveillance over US citizens and non-citizens alike in reaction to 9/11. She argues that this atmosphere, by eroding concern for privacy, allowed tech companies to emerge as powerhouses while state actors, often working together with those companies, deployed unparalleled intrusive measures in the name of security.

Today, governments around the world are operating with expanded emergency powers to fight coronavirus, and many scholars and journalists have expressed concern that these temporary measures could become permanent.

Already, instances of officially sanctioned targeting of marginalised populations are spreading. LGBTQ people face disproportionate danger when information-gathering tactics expand, a danger that predates the pandemic and has only worsened over the last few months. Women and children face especially high risks when health data is politicised, and, according to emerging studies, COVID-related discrimination disproportionately targets non-white people.

Too often, the assessment of the risks and benefits of surveillance occurs behind closed doors and without the input of the communities that have the most to lose. Or it occurs too late. A handful of European countries have recently reined in surveillance measures initially deployed in their rush to react to coronavirus.

Norway halted the use of its COVID-19 contact tracing app, “Smittestopp”, which tracked citizens’ locations by frequently uploading GPS coordinates to a central server. The decision came just hours before Amnesty International released the results of its investigation into some of the most invasive contact tracing apps—among them “Smittestopp”—which were found to imperil the privacy, and security, of hundreds of thousands of people.

Courts in France, likewise amid sweeping privacy concerns, temporarily banned the government’s COVID-19 surveillance drones. In the UK, the government has admitted to breaking privacy law through its reckless deployment of a test-and-trace programme.

In other instances, an informed assessment of risks and benefits may not take place at all. “Governments are legitimising tools of oppression as tools of public health,” explains Raman Jit Singh Chima, Asia Policy Director and Senior International Counsel at Access Now, a digital rights NGO. Russia’s latest app is reported to track migrant workers’ biometric data—and that’s just the starting point. A member of Russia’s parliament hinted at how the app might be used, explaining that “it will solve all the issues with migrants in Russia. We should have done it [a] long time ago, but because of political correctness and tolerance missed out on solutions to many problems.”   

For many communities, privacy is security. Losing sight of this during any circumstances would be dangerous. Now, at the height of state and corporate power, it could put those who are already vulnerable and face discrimination at even greater risk.

Photo by Igor Starkov from Pexels

Privacy or Paycheck: Protecting Workers Against Surveillance

By Tatum Millet, 3rd August 2020

COVID-19 changed how the world “works.” Ordered out of offices, workers under quarantine set up shop at home and, for many, the change will outlast the pandemic. Work-from-home is here to stay. Even workers who are now returning to “normal” office life will not be returning to the same workplaces they left in March.

As offices begin to reopen, employers are instituting safety measures that could permanently change workplace culture. 

The future of work in a post-COVID world is still taking shape, but whether at home or at the office, worker privacy is poised to become a thing of the past. 

As managers fret over maximising the productivity of remote employees, workers are learning to cope with invasive technologies that track their keystrokes and monitor their screens.

Companies reopening for in-person work are rolling out employee monitoring systems using wearable technology, video surveillance, and mobile-tracking apps to contain the spread of the virus.

COVID-19 has increased the surveillance power of employers, and privacy advocates warn that technologies adopted during the pandemic could “normalise” aggressive workplace surveillance for years to come. 

Yet, for many workers, particularly workers in the gig economy, constant surveillance has been the norm for years. Examining how the pandemic has further entrenched a system in which work is mediated, monitored, analysed, and optimised by algorithms—and confronting the position of workers within that system—demonstrates that protecting the digital rights of workers requires more than protecting their data. It requires addressing the imbalance of power between employers and the workers trying to scratch a living out of an algorithm. 

The lines between work and non-work have bled together over the past few decades. As platforms and employers increasingly rely on algorithms and monitoring tools to manage workers, widespread “datafication” has transformed the relationship between workers and management so that “an individual employee becomes a score on a manager’s dashboard, an upward or downward trajectory, a traceable history of tasks performed, or a log file on a company’s server.”

This data is valuable to employers, and workers produce it all the time—even while sleeping, workers have productive potential. Professor Peter Fleming, writing on contemporary management ideology, warns, “ominously, we are now permanently poised for work.” 

Schemes to algorithmically optimise productivity are affecting workers across economic sectors, from the tech employee instructed to wear a Fitbit 24/7, to the Deliveroo driver rushing to meet fine-grained performance targets, and the warehouse worker monitored during bathroom breaks.

Yet, digital surveillance does not impact workers equally. Economically vulnerable workers often have no choice but to consent to invasive surveillance measures. In contrast, workers with bargaining power—often highly skilled workers, or workers with strong union support—have the power to resist. Around the world, as the gig economy continues to break down organisational networks, isolating workers in states of precarious employment, a tiered system of surveillance inequality is taking shape. 

In order to protect the most vulnerable workers from the exploitative practices made possible by digital technology, campaigns to protect the personal data of gig economy workers must be complemented by efforts to secure stronger labour rights.  

Recent efforts by gig economy workers to protect their employment rights demonstrate just how closely the problem of worker surveillance is entwined with broader issues of labour today.

In the UK, Uber drivers sued Uber for improperly categorising them as freelance workers instead of employees with benefits and protections. Yet without access to Uber’s platform data, drivers cannot calculate their net earnings to ensure compliance with minimum wage protections, or launch an appeal if they are fired by an algorithm. So the drivers turned to the GDPR: with the support of the App Drivers and Couriers Union, a number of UK Uber drivers filed a lawsuit against Uber in the district court in Amsterdam to demand access to their data, including GPS location, hours logged on, tips, and trip details.

The Uber drivers are not just seeking more control over their data: they are seeking more control over their state of employment. As Anton Ekker, the Amsterdam privacy lawyer representing the drivers, said to the Guardian, “this is about the distribution of power.” Providing gig economy workers with stable wages, benefits, and job security would give workers more leverage to organise in opposition to invasive digital tracking technology.

Crucially, worker organisation throughout the gig economy also promises to expand the impact of local and national victories against platform companies, and facilitate strategic coordination so workers can more effectively take advantage of existing legal mechanisms—such as the GDPR—that have the potential to protect their rights.  

As the COVID-19 pandemic threatens to hurl vulnerable workers into an even more precarious economic future, it is also expanding the surveillance power of employers. Protecting the digital rights of workers against this ominous “future of work” in a manner that fully addresses the state of economic inequality in the gig economy calls for more than data safeguards—it calls for empowerment.

Tatum Millet is a second-year student (2L) at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Photo by Daria Shevtsova from Pexels