The digital rights future we want: imagining a Universal Declaration of Digital Rights

By Nani Jansen Reventlow, 15th November 2018

In a perfect world, what would you like to be able to say is true about our digital rights five or ten years from now?

With this question, we kicked off the “Future-proofing our digital rights” workshop in Berlin in late September. Representatives from Liberties, Privacy International, SHARE Foundation, the Polish Helsinki Foundation for Human Rights, Oxford Information Labs, Liberty, the Public Interest Litigation Project, Prototype Fund, Panoptykon Foundation, Bits of Freedom, Global Partners Digital, La Quadrature du Net, and Digital Security Lab Ukraine tried to imagine what a positive future conversation about our digital rights would look like and wrote down a multitude of statements, ranging from “people have more time to do the creative things they care about” and “the Internet of Things has helped to limit climate change” to “the information shared on the internet can only be deleted in line with international human rights standards”, “children are safe in the digital context” and “no ‘black boxes’ are allowed to determine people’s rights”.

Temporarily switching our focus from the digital rights battles being fought today to the future we want turned out to be an invigorating and inspiring experience, and that energy fed into our own imagining of a “Universal Declaration of Digital Rights”. Starting from how rights are currently protected in our international human rights system, we looked at how these would be interpreted in the future and asked ourselves what would need to be established over the coming years.

The Declaration that resulted from our collective imagination combined re-imagined existing rights, such as fair trial rights focused on algorithmic decision-making, or a right to “understand the implications of technology” as a manifestation of the right to education, with the formulation of new potential rights, such as a right to modify and update devices, a right to interoperability of technologies, and the right to disconnect.

DING magazine made this wonderful visualisation:

One reflection shared by a number of participants was how time-resistant our current human rights framework is; a frequent point of consideration was whether the newly imagined rights were truly new or whether they would fall within the scope of the existing framework.

Of course, there are other declarations and statements of our human rights in the digital age, such as Article 19’s proposal for a Universal Declaration of Digital Rights, the Internet Rights and Principles Coalition’s Charter of Human Rights and Principles for the Internet, or ZEIT-Stiftung’s European Digital Charter. It was interesting to see many parallels between these projects and our collective creative imagination sprint.

What do you think? Do we need a re-imagining of our human rights framework to make it fit for the digital age? Or does our current framework suffice? Get in touch to let us know!

The “right to disconnect”: to what extent will we need a right to unplug from our digital lives?

By Jonathan McCully, 9th November 2018

It seems inevitable that, in the not-too-distant future, we will be living astoundingly connected lives. In Europe, internet access and the frequency of internet usage continue to rise. In 2017, 72% of EU citizens were reported as having accessed the internet every day, compared to 56% in 2011. Furthermore, the use of information and communication technologies is infiltrating our lives in more and more ways: we use them to carry out our jobs, declare our taxes, shop, bank, and keep in touch with family and friends. With these trends set to continue into the future, should governments and corporations protect us against the harms that might accompany the feeling or expectation of having to be online and connected at all times?

This question was discussed at DFF’s “Future-proofing our digital rights” workshop, during which participants looked at whether and to what extent we might need a “right to disconnect” as part of our human rights framework. Before considering what a “right to disconnect” would look like, the group considered the potential harms such a right would seek to protect us against.

At one level, the “right to disconnect” could be seen as a means of protecting personal data by providing individuals with control over when and under what circumstances their digital devices are connected and therefore collecting and potentially transmitting data about them. At another level, the right could be viewed as a means of ensuring respect for individual autonomy and self-determination by allowing individuals to live out their lives without having to be “online” and connected (i.e. a right to be a “digital hermit”). It could also be viewed as a right aimed at protecting the physical and mental integrity of a person. Some studies have shown greater levels of reported stress in those who feel the need to constantly check their emails, texts, or social media accounts, while other studies have shown a link between overuse of some technologies, such as social media sites and entertainment-on-demand, and poor physical and mental health. That being said, it still remains unclear to what extent we are truly harmed by being constantly or regularly connected.

One area in which the harms of being constantly connected have been explored more deeply is the employment context. The pressure on employees to check emails outside of work has been linked to burnout, poor emotional well-being, and strain on personal relationships; the mere expectation of being reachable outside working hours can have a negative impact on an employee’s health and well-being. So, should the “right to disconnect” be formulated as a right under employment law, i.e. a right that permits employees to log off from their work devices outside working hours?

France has led the way on this interpretation of the “right to disconnect” (le droit à la déconnexion). In 2016, the French Labour Code was amended to require employers to implement “procedures for the full exercise by the employee of his/her right to disconnect” and to “establish[] . . . control mechanisms in order to regulate the use of digital tools, with the aim of ensuring compliance with rest periods and leave as well as personal and family life.” The “right to disconnect” was not explicitly defined in the code, but the amendment followed a line of case law from the French Court of Cassation in the early 2000s recognising that an employee should not be required to accept working from home (and bringing work materials home with them), or be expected to be reachable on the phone after working hours. This year, the French Court of Cassation ordered a British company to pay a former employee €60,000 because it failed to respect his “right to disconnect” from his phone and emails outside working hours.

Some other countries in Europe have followed France’s lead. For example, Italy enacted a law in May 2017 that required employers of flexible workers to include in their contracts “the technical and organizational measures necessary to ensure that the worker is disconnected from … technological equipment”. More recently, in August 2018, the Irish Labour Court awarded a business executive €7,500 after she was consistently expected to check her emails outside of office hours in contravention of Irish law on working hours.

The idea that we need a right or a law that protects us against the potential harms caused by overwork is not new. In fact, it is reflected in Article 24 of the Universal Declaration of Human Rights, which states that “[e]veryone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.” However, is this a sufficient articulation of the “right to disconnect” we will need in the future? For instance, do we need a “right to disconnect” outside of the employment context? Is an employment right sufficient to fully respect an individual’s decision to be “offline” (i.e. the right to be a “digital hermit”) or an individual’s ability to control when and under what circumstances their devices are “connected” (and collecting data)?

To meet the potential shortcomings inherent in an employment “right to disconnect”, the right could be accompanied by two further formulations. The first could give consumers of Internet of Things (IoT) products the right to offline alternatives or the right to be able to use a product offline. This would mean that all features of an IoT product that do not require a connection should still be available when disconnected. For example, if a consumer buys a “smart fridge” they should have a right to use the fridge’s basic functionality without it having to be connected to the internet. This would help guarantee that, as IoT products begin to replace their non-digital, non-networked counterparts, those who wish to remain offline can do so. The second formulation of the right could protect individuals against being put at a disadvantage purely on the basis that they are offline or have no online “footprint”, which would also prohibit adverse conclusions being drawn against an individual solely on the basis that they lack an online presence. This would help ensure that individuals who wish to remain offline will not be compelled to connect in order to benefit from basic services such as access to welfare or health services.

It is difficult to envisage what the scope of the “right to disconnect” should be without a full understanding of the harms that may be caused by the feeling or expectation of always being connected in our work, social and home lives. More research on this topic would be a welcome addition to this conversation. Nevertheless, the discussion at the “Future-proofing our digital rights” workshop was a useful exercise in thinking about whether the categorisation of the “right to disconnect” as an employment right, as can already be seen in countries such as France and Italy, will sufficiently protect our (non-)digital lives in the future. This raises the question: if we really do need a “right to disconnect”, what would the contours of such a right be?

Future-proofing our digital rights through strategic litigation: how can we win?

By Jelle Klaas, 7th November 2018

In September 2018, I was privileged to take part in the wonderful DFF workshop “Future-proofing our digital rights” in Berlin. With great participants from across Europe – lawyers, legal scholars and activists for digital rights – we explored the near and not-so-near future of digital rights.

One part of the workshop was looking at the future from a more positive perspective (“what would make the digital future bright and full of rainbows”). The other part was about scoping the risks and dangers (“what would make the digital future dark and full of robots”). Read more about this on the DFF blog.

The breakout session I want to tell you about was the session titled “what can we win now?”

The idea for this session was to scope out a few litigation ideas on what could be the digital rights equivalent of a landmark moment such as Rosa Parks’ act of defiance or the U.S. Supreme Court’s decision in Brown v. Board of Education. Such a case wouldn’t immediately bring us a better (digital) world (that, unfortunately, cannot be achieved through litigation alone), but it could strengthen the campaign for such a world and could give the digital rights movement further inspiration and momentum.

Our small breakout group first debated the difference between digital rights issues on the one hand, and more traditional civil rights issues like racism and segregation on the other.

In situations like Brown v. Board of Education or the action by Rosa Parks, a court case that exposed the daily practice of structural racism and segregation and its injustice was needed to set change in motion. However, many digital rights issues appear more abstract to the general public than, for instance, the segregation policy of the U.S. in the 1950s. It can, therefore, be more difficult to show why violations of these digital rights matter and really hurt people in their everyday lives.

With this in mind, we thought that bringing cases that seem absurd and unrealistic at first, but are nonetheless true, could be one litigation tactic for bringing home the extent to which digital rights are under attack. Cases that convey the feeling: “this could also happen to me.” And cases that almost everyone can relate to and feel threatened by.

We decided to focus on cases regarding the use of algorithms and decisions by computers/artificial intelligence.

We came up with four possible scenarios of cases that could help us “win”:

  • A baby profiled as a terrorist. Unfortunately, we already know of one baby that was profiled based on an algorithm and whose diapers were searched by border police. This is of course absurd (babies do not have detectable political views, and even if they did, they could not act upon them), but we discussed that perhaps not everyone could relate to this (“I have no ‘risky’ name and/or flight patterns, so this does not concern my children”).
  • A person who is charged more to buy a car or get a service because of their data profile. A fair amount of research already shows that algorithms (often containing a fair number of errors) result in people wrongly being denied services or credit. A case where a biased algorithm charges more for a service or product (one that has nothing to do with the bias) could be a good way to show that algorithmic decision-making can affect everyone.
  • A child taken away from its parents by child care authorities based on data analysis. To detect whether children are at risk of abuse or in an unsafe situation, child protection agencies are making use of algorithms. There have already been reported incidents of children being taken away from parents (partly) based on biased algorithms. Most people would probably accept that removing children from their families is one field in which a fully human, contextual assessment is needed and where the margin of error should be low. Litigation around an awful situation that shows how this can go wrong could be very powerful.
  • Essential medical supplies not being delivered to a household because the area is deemed off limits. We have already seen this happen: some areas, with a bad reputation or a poorer population, receive worse or no delivery of products from some companies. This policy can stem from algorithms trained on discriminatory data. A possible litigation strategy would be to find a claimant living right next to such a poor area, to show the spillover effects of this kind of algorithmic decision-making.

Interesting raw first ideas, in my opinion, that could be expanded upon and further explored.

The workshop reminded us that we have to prepare for the future. Proactively thinking about campaigning and strategic litigation, not just as a reaction to a new harmful practice, is a very good way to do so. Creating or finding the case or incident that can really grab the attention of a larger group of people, outside the digital rights community, could help build a stronger, more cross-disciplinary movement or make these already existing movements bigger and broader.

The digital rights movement is already very aware of the digital rights violations of the present. It could also prepare to find, amplify and effectively respond to future incidents that, for instance, alert the general public to the risks of algorithmic and automated decision-making.

About the author: Jelle Klaas is the litigation director of the Nederlands Juristen Comité voor de Mensenrechten (NJCM — Dutch Legal Committee for Human Rights) and responsible for their Public Interest Litigation Project (PILP).