The “right to disconnect”: to what extent will we need a right to unplug from our digital lives?

By Jonathan McCully, 9th November 2018

It seems inevitable that, in the not too distant future, we will be living astoundingly connected lives. In Europe, internet access and the frequency of internet usage continue to rise. In 2017, 72% of EU citizens were reported as having accessed the internet every day, compared to 56% in 2011. Furthermore, the use of information and communication technologies is infiltrating our lives in more and more ways: we use them to carry out our jobs, declare our taxes, shop, bank, and keep in touch with family and friends. With these trends set to continue into the future, should governments and corporations protect us against the harms that might accompany the feeling or expectation of having to be online and connected at all times?

This question was discussed at DFF’s “Future-proofing our digital rights” workshop, during which participants looked at whether and to what extent we might need a “right to disconnect” as part of our human rights framework. Before considering what a “right to disconnect” would look like, the group considered the potential harms such a right would seek to protect us against.

At one level, the “right to disconnect” could be seen as a means of protecting personal data by providing individuals with control over when and under what circumstances their digital devices are connected and therefore collecting and potentially transmitting data about them. At another level, the right could be viewed as a means of ensuring respect for individual autonomy and self-determination by allowing individuals to live out their lives without having to be “online” and connected (i.e. a right to be a “digital hermit”). It could also be viewed as a right aimed at protecting the physical and mental integrity of a person. Some studies have shown greater levels of reported stress in those who feel the need to constantly check their emails, texts, or social media accounts, while other studies have shown a link between overuse of some technologies, such as social media sites and entertainment-on-demand, and poor physical and mental health. That said, it remains unclear to what extent we are truly harmed by being constantly or regularly connected.

One area in which the harms of being constantly connected have been explored more deeply is the employment context. The pressure on employees to check emails outside of work has been linked to burnout, poor emotional well-being, and strain on personal relationships; the feeling of having to check emails outside of working hours can have a negative impact on an employee’s health and well-being. So, should the “right to disconnect” be formulated as a right under employment law, i.e. a right that permits employees to log off from their work devices outside working hours?

France has led the way on this interpretation of the “right to disconnect” (le droit à la déconnexion). In 2016, the French Labour Code was amended to require employers to implement “procedures for the full exercise by the employee of his/her right to disconnect” and to “establish[] . . . control mechanisms in order to regulate the use of digital tools, with the aim of ensuring compliance with rest periods and leave as well as personal and family life.” The “right to disconnect” was not explicitly defined in the code, but the amendment followed a line of case law from the French Court of Cassation in the early 2000s that recognised that an employee should not be required to accept working from home (and bringing work materials home with them), or be expected to be reachable on the phone after working hours. This year, the French Court of Cassation ordered a British company to pay a former employee €60,000 because it failed to respect his “right to disconnect” from his phone and emails outside working hours.

Some other countries in Europe have followed France’s lead. For example, Italy enacted a law in May 2017 that required employers of flexible workers to include in their contracts “the technical and organizational measures necessary to ensure that the worker is disconnected from … technological equipment”. More recently, in August 2018, the Irish Labour Court awarded a business executive €7,500 after she was consistently expected to check her emails outside of office hours in contravention of Irish law on working hours.

The idea that we need a right or a law that protects us against the potential harms caused by overwork is not new. In fact, it is reflected in Article 24 of the Universal Declaration of Human Rights, which states that “[e]veryone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.” However, is this a sufficient articulation of the “right to disconnect” we will need in the future? For instance, do we need a “right to disconnect” outside of the employment context? Is an employment right sufficient to fully respect an individual’s decision to be “offline” (i.e. the right to be a “digital hermit”) or an individual’s ability to control when and under what circumstances their devices are “connected” (and collecting data)?

To address the potential shortcomings of an employment-based “right to disconnect”, the right could be accompanied by two further formulations. The first could give consumers of Internet of Things (IoT) products the right to offline alternatives, or the right to be able to use a product offline. This would mean that all features of an IoT product that do not require a connection should still be available when disconnected. For example, if a consumer buys a “smart fridge” they should have a right to use the fridge’s basic functionality without it having to be connected to the internet. This would help guarantee that, as IoT products begin to replace their non-digital, non-networked counterparts, those who wish to remain offline can do so. The second formulation of the right could protect individuals against being put at a disadvantage purely on the basis that they are offline or have no online “footprint”, which would also prohibit adverse conclusions being drawn against an individual solely on the basis that they lack an online presence. This would help ensure that individuals who wish to remain offline are not compelled to connect in order to benefit from basic services such as access to welfare or health services.

It is difficult to envisage what the scope of the “right to disconnect” should be without a full understanding of the harms that may be caused by the feeling or expectation of always being connected in our work, social and home lives. More research on this topic would be a welcome addition to this conversation. Nevertheless, the discussion at the “Future-proofing our digital rights” workshop was a useful exercise in thinking about whether the categorisation of the “right to disconnect” as an employment right, as can already be seen in countries such as France and Italy, will sufficiently protect our (non-)digital lives in the future. This raises the question: if we really do need a “right to disconnect”, what would the contours of such a right be?

Future-proofing our digital rights through strategic litigation: how can we win?

By Jelle Klaas, 7th November 2018

In September 2018, I was privileged to take part in the wonderful DFF workshop “Future-proofing our digital rights” in Berlin. With great participants from across Europe – lawyers, legal scholars and activists for digital rights – we explored the near and not-so-near future of digital rights.

One part of the workshop was looking at the future from a more positive perspective (“what would make the digital future bright and full of rainbows”). The other part was about scoping the risks and dangers (“what would make the digital future dark and full of robots”). Read more about this on the DFF blog.

The breakout session I want to tell you about was the session titled “what can we win now?”

The idea for this session was to scope out a few litigation ideas on what could be the digital rights equivalent of a landmark moment such as Rosa Parks’ act of defiance or the U.S. Supreme Court’s decision in Brown v. Board of Education. Such a case wouldn’t immediately bring us a better (digital) world (that, unfortunately, cannot be achieved through litigation alone), but it could strengthen the campaign for such a world and could give the digital rights movement further inspiration and momentum.

Our small breakout group first debated the difference between digital rights issues on the one hand, and more traditional civil rights issues like racism and segregation on the other.

In situations like Brown v. Board of Education or the action by Rosa Parks, a court case that showed the daily practice of structural racism and segregation and its injustice was needed to set change in motion. However, it would appear that many digital rights issues are more abstract to the general public than, for instance, the segregation policy of the U.S. in the 1950s. It can, therefore, be more difficult to show why violations of these digital rights matter and really hurt people in their everyday lives.

With this in mind, we thought that bringing cases that seem absurd and unrealistic at first, but are nonetheless true, can be one litigation tactic that can be used to bring home the extent to which digital rights are under attack. Cases that convey the feeling: “this could also happen to me.” And cases that almost everyone can relate to and feel threatened by.

We decided to focus on cases regarding the use of algorithms and decisions by computers/artificial intelligence.

We came up with four possible scenarios of cases that could help us “win”:

  • A baby profiled as a terrorist. Unfortunately, we already know of one baby who was profiled based on an algorithm and whose diapers were searched by border police. This is of course absurd (babies do not have detectable political views, and even if they did, they could not act upon them), but we discussed that perhaps not everyone could relate to this (“I have no ‘risky’ name and/or flight patterns, so this does not concern my children”).
  • A person charged more to buy a car or obtain a service because of their data profile. A fair amount of research has already shown that algorithms (which often contain errors) have resulted in people being wrongly denied services or credit. A case in which a biased algorithm charges more for a service or product (one that has nothing to do with the bias) could be a good way to show that algorithmic decision-making can affect everyone.
  • A child taken away from its parents by child care authorities based on data analysis. Child protection agencies are using algorithms to detect whether children are at risk of abuse or in an unsafe situation. There have already been reported incidents of children being taken away from their parents (partly) on the basis of biased algorithms. Most people would probably accept that decisions about removing children from their families are one field in which a fully human, contextual viewpoint is needed and where the margin of error should be low. Litigation around an awful situation that shows how this can go wrong could be very powerful.
  • Essential medical supplies not being delivered to a household because the area is deemed off limits. We have already seen this happen: some areas, with a bad reputation or a poorer population, receive worse or no delivery of products from some companies. This policy can stem from algorithms trained on discriminatory data. A possible litigation strategy would be to pick a house right next to a poor area, to show the spillover effects of such algorithmic decision-making.

Interesting raw first ideas, in my opinion, that could be expanded upon and further explored.

The workshop reminded us that we have to prepare for the future. Proactively thinking about campaigning and strategic litigation, not just as a reaction to a new harmful practice, is a very good way to do so. Creating or finding the case or incident that can really grab the attention of a larger group of people, outside the digital rights community, could help build a stronger, more cross-disciplinary movement or make these already existing movements bigger and broader.

The digital rights movement is already very aware of the digital rights violations in the present. It could also prepare to find, amplify and effectively respond to incidents in the future that, for instance, warn the general public about algorithms and computer decisions.

About the author: Jelle Klaas is the litigation director of the Nederlands Juristen Comité voor de Mensenrechten (NJCM — Dutch Legal Committee for Human Rights) and responsible for their Public Interest Litigation Project (PILP).

DFF and SHARE host second litigation retreat: strategising on digital rights in the heart of Belgrade

By Jonathan McCully, 31st October 2018

This month, we were very happy to be working again with our friends at the SHARE Foundation to host our second litigation retreat. On this occasion, the retreat brought together twelve digital rights litigators from across Europe in bustling Belgrade, Serbia, to share and further develop their strategic litigation skills.

At the retreat, representatives from nine organisations that work on defending rights and freedoms in the digital space came together: Access Now, Amnesty International, Digital Security Lab Ukraine, Human Rights Monitoring Institute, Irish Council for Civil Liberties, noyb, Open Rights Group, Privacy International, and the Public Interest Litigation Project. Each of these organisations shares an interest in using litigation as a means to secure changes in law, policy or practice that enhance the protection of rights and freedoms in the digital sphere.

The retreat was an opportunity for litigators to get away from the office and focus the mind on litigation work in a collaborative environment. All participants came to the retreat with a case that they were working on, and that they could strategise and plan around. The cases workshopped during the retreat dealt with a range of digital rights issues: from website blocking and surveillance, to challenging data retention regimes and securing enforcement of the General Data Protection Regulation.

Alongside the workshopping of specific cases, the four-day retreat involved a mixture of group work, plenary discussion, and substantive knowledge sharing sessions dealing with a range of issues, from case management and campaigning around a case to building and implementing a litigation strategy. We also had an opportunity to hear from Senior Advisor to the UN Special Rapporteur on Extreme Poverty and Human Rights, Christiaan van Veen, who discussed the potential role of the UN Special Mechanisms in maximising impact in strategic cases and the work some UN mandates are currently doing on digital rights.

By the end of the retreat, participants left with the core elements of a comprehensive litigation and advocacy plan for their cases. One participant noted that the retreat was a “great and enriching experience that gave me practical tools for future use.” Another participant observed that the retreat “itemised how litigation strategy is but one piece of the advocacy puzzle.” The retreat also facilitated a better understanding of what other European digital rights litigators are currently working on. One participant noted that “[t]he collaborative efforts on digital rights across Europe is a lot more extensive and diverse than I knew.”

The agenda and materials used during the retreat were developed on the basis of input provided in follow-up conversations with members of our network interested in working on skill-building and skill-sharing after our February strategy meeting, as well as feedback provided by participants from our July retreat in Montenegro. We further benefitted from the expert guidance of Allen Gunn from Aspiration, who helped ensure that both our retreats fostered co-learning between participants in a collaborative environment. We would also like to thank one of the participants from our July retreat, Nevena Krivokapić, a lawyer at the SHARE Foundation, who joined us again as a co-facilitator for the week. “I had a fantastic and unique experience co-facilitating this event,” she said, “I enjoyed being able to spend a few days with the next generation of digital rights defenders, who shared very valuable insights for my future work.”

The litigation retreats form part of DFF’s work in supporting skill-building and skill-sharing across the field. This work will continue into 2019, with DFF supporting two thematically focussed litigation meetings. A call for applications for the first of these meetings, which will take place around May and focus on litigation around the GDPR, can be found on our website (deadline 30 November 2018). We are also working on a project to develop strategic litigation toolkits, which will include materials developed during the litigation retreats. We hope to share more information about this in the not-too-distant future!