Future-proofing our digital rights through strategic litigation: how can we win?

By Jelle Klaas, 7th November 2018

In September 2018, I was privileged to take part in the wonderful DFF workshop “Future-proofing our digital rights” in Berlin. With great participants from across Europe – lawyers, legal scholars and digital rights activists – we explored the near and not-so-near future of those rights.

One part of the workshop looked at the future from a more positive perspective (“what would make the digital future bright and full of rainbows”); the other scoped out the risks and dangers (“what would make the digital future dark and full of robots”). Read more about this on the DFF blog.

The breakout session I want to tell you about was titled “what can we win now?”

The idea for this session was to scope out a few litigation ideas on what could be the digital rights equivalent of a landmark moment such as Rosa Parks’s protest or the U.S. Supreme Court’s decision in Brown v. Board of Education. Such a case wouldn’t immediately bring us a better (digital) world (that, unfortunately, cannot be achieved through litigation alone), but it could strengthen the campaign for such a world and give the digital rights movement further inspiration and momentum.

Our small breakout group first debated the difference between digital rights issues on the one hand, and more traditional civil rights issues like racism and segregation on the other.

In situations like Brown v. Board of Education or Rosa Parks’s protest, a court case that exposed the daily practice of structural racism and segregation, and its injustice, was needed to set change in motion. Many digital rights issues, however, appear more abstract to the general public than, for instance, U.S. segregation policy in the 1950s. It can therefore be more difficult to show why violations of these rights matter and genuinely hurt people in their everyday lives.

With this in mind, we thought that bringing cases that seem absurd and unrealistic at first, but are nonetheless true, could be one litigation tactic for bringing home the extent to which digital rights are under attack: cases that convey the feeling “this could also happen to me,” and that almost everyone can relate to and feel threatened by.

We decided to focus on cases concerning the use of algorithms and decisions made by computers or artificial intelligence.

We came up with four possible scenarios of cases that could help us “win”:

  • A baby profiled as a terrorist. Unfortunately, we already know of one baby that was profiled based on an algorithm and whose diapers were searched by border police. This is of course absurd (babies do not have detectable political views, and even if they did, they could not act on them), but we discussed that perhaps not everyone could relate to this (“I have no ‘risky’ name or flight patterns, so this does not concern my children”).
  • A person charged more to buy a car or obtain a service because of their data profile. A substantial body of research already shows that algorithms (often containing a fair number of errors) have wrongly denied people services or credit. A case in which a biased algorithm charges more for a service or product that has nothing to do with the bias could be a good way to show that algorithmic decision-making can affect everyone.
  • A child taken away from its parents by child protection authorities based on data analysis. To detect whether children are at risk of abuse or in an unsafe situation, child protection agencies are making use of algorithms. There have already been reported incidents of children being taken away from their parents (partly) on the basis of biased algorithms. Most people would probably accept that removing children from their families is one field in which a fully human, contextual assessment is needed and where the margin of error should be low. Litigation around an awful situation that shows how this can go wrong could be very powerful.
  • Essential medical supplies not being delivered to a household because the area is deemed off limits. We have already seen this happen: areas with a bad reputation or a poorer population receive worse delivery service, or none at all, from some companies. Such a policy can stem from algorithms based on discriminatory data. A possible litigation strategy would be to find a claimant living right next to such an area, to show the spillover effects of this kind of algorithmic decision-making.

These are interesting raw first ideas, in my opinion, that could be expanded upon and explored further.

The workshop reminded us that we have to prepare for the future. Thinking proactively about campaigning and strategic litigation, rather than only reacting to a new harmful practice, is a very good way to do so. Creating or finding the case or incident that can really grab the attention of a larger group of people outside the digital rights community could help build a stronger, more cross-disciplinary movement, or make existing movements bigger and broader.

The digital rights movement is already very aware of present-day digital rights violations. It could also prepare to find, amplify and effectively respond to future incidents that can, for instance, alert the general public to the risks of algorithmic and automated decision-making.

About the author: Jelle Klaas is the litigation director of the Nederlands Juristen Comité voor de Mensenrechten (NJCM, Dutch Lawyers Committee for Human Rights) and responsible for its Public Interest Litigation Project (PILP).