Protecting Children’s Digital Rights in Schools

By Nani Jansen Reventlow, 30th July 2020

This post was written in association with The State of Data 2020, an event organised by defenddigitalme, who advocate for children’s privacy in data and digital rights. 

In recent years, cases have been decided across Europe that raise serious issues about the digital rights of children in schools.

Unsurprisingly, schools are not immune to the increased digitisation sweeping through our societies. Around Europe, many schools have already begun to introduce tools that collect and process students’ biometrics, such as their faces and fingerprints, as well as other personal, sensitive data.

It sounds worrying – and it is. Luckily, however, children’s personal data has special protection under the GDPR.

Processing children’s personal data in the education context isn’t entirely off limits under this regulation. It’s acceptable for schools to use such data if there is deemed to be a good reason for it: for example, if it’s in the interest of the child or the public. This could include monitoring immunisations, keeping attendance, or developing online learning platforms that further the education of the child.

But there are, of course, caveats. For one, the GDPR sets a higher standard of justification for processing people’s unique biometric data, given that this is an extremely sensitive and invasive form of data collection. Also, if schools don’t ensure their students’ data are completely secure, or if they collect more data than they actually need, they can end up breaching the GDPR.

As we’ll see below, real life examples of such breaches have already begun to surface.

Taking Attendance through Facial Recognition

First we’ll go to Sweden, where the Secondary Education Board in the municipality of Skellefteå began using facial recognition in 2018 to monitor attendance at a secondary school.

In this case, 22 students were monitored by cameras as they entered the classroom every day. The purpose of this experiment was to see if automation could save time in registering students’ attendance. The school hoped that the time saved could then be used for teaching. The students’ facial images, as well as their first and last names, were processed and stored, with consent from their guardians.

Similarly, in France in 2019, a regional authority launched an experimental security project in two high schools –– one in Marseille and one in Nice –– installing facial recognition gates at the entrance of the schools. The gates identified students using facial images, and also scanned for unidentified visitors. The students whose data would be processed had given their consent.

The outcome? Both the Swedish and French Data Protection Authorities found that using facial recognition technology to monitor students breached the GDPR, specifically Article 5 and Article 9.

Under Article 5, personal data processing must be “adequate, relevant, and limited” to what is necessary to carry out a specific purpose.

Article 9, on the other hand, makes clear that processing biometric data is permissible only under strict conditions. One of those conditions is “explicit consent” – something that both schools used to justify their experiments. However, due to the imbalance of power between the data subjects –– the students –– and the data controller –– the school –– both data protection authorities found that consent had not been freely given.

As well as this, both authorities found that facial recognition technology infringed upon the personal integrity and data protection rights of students to a degree that was disproportionate to the purposes being pursued.

The Swedish data protection authority actually went further, and found that, given how little we still know about the potential risks and consequences of deploying facial recognition, the school should have consulted with the authority for advice before using it.

Fingerprint Scanning for Student Meals

Another key case occurred in Poland, and concerned not facial recognition, but fingerprint collection.

Starting in 2015, Primary School No. 2 in Gdansk processed the fingerprints of 680 children as a method of verifying meal payment at the school canteen.

Once again, the school had obtained consent from the guardians of students whose biometric data was being processed. Four children opted out, and paid using an alternative method. However, these students had to wait at the back of the lunch queue until all the students using biometric data verification had passed through the canteen.

Just as the Swedish and French schools had done, the Polish school argued that it had obtained “explicit consent.” But, once again, the Polish data protection authority rejected this argument. This time, the authority found that, because students who didn’t consent had to wait at the back of the line, the school was treating those students unequally – thereby placing pressure on students to consent.

As well as this, the authority found, once again, that biometric data collection wasn’t necessary in this case, given that there were alternative, less invasive methods for students to verify their meal payments.

Insecure Apps for Parents, Students and Teachers

Moving on to Norway, the Education Agency of the Municipality of Oslo launched an app in 2019 that allowed parents and students to send messages to school staff.

Soon after the app launched, Aftenposten, a widely read Norwegian news outlet, broke the news that the app contained a major security vulnerability. Apparently anyone who could log in to the portal could potentially gain access to every communication that had been sent through the app – as well as the personal information of 63,000 school children in Oslo.

The Norwegian data protection authority found that the Education Agency had breached the GDPR – Article 5(1)(f) – by launching an app without properly testing it to ensure the information on it would be stored securely.

The authority also found that the Education Agency breached Article 32 of the GDPR, which requires data controllers to ensure a level of security appropriate to the risk. In light of the special vulnerability of children, the Education Agency did not take sufficient measures to ensure the confidentiality and integrity of its messaging system.

As a result of the findings, the data protection authority imposed a fine on the Education Agency. It also imposed a similar fine on the Municipality of Bergen, where data security insufficiencies exposed the personal data of students by allowing unauthorised users to access the school’s administrative system, as well as the digital learning platform where students’ classwork, evaluations, and personal data were stored.

Recurring Patterns

In considering whether a school violated the data protection rights of students, the data protection authorities of Sweden, France, Poland, and Norway all drew attention to the relationship between a school and its students. Schools are in a position of authority when they monitor students’ data, and students are in a position of dependency. The power imbalance in this relationship will most often invalidate consent as a lawful condition for biometric data processing.

As well as this, the particularly sensitive and vulnerable nature of children’s data processed in a school context –– often relating to issues such as health, identity and development –– requires schools to protect students’ personal data with particularly secure technical measures.

Educational authorities carry out many tasks that require collecting personal data from students. Technologies, including biometric technologies like facial recognition and fingerprinting, present new opportunities for schools to expedite processes like attendance monitoring, school security, and lunch payment.

But the recent GDPR fines imposed on schools for implementing such programs underscore that schools cannot sacrifice the privacy of their students for expediency.

Whatever measures schools adopt, they must safeguard children’s rights and be necessary and proportionate to the purpose pursued. If there are alternative measures that do not require the processing of sensitive personal data, and those measures can achieve the same results, then the processing of sensitive personal data is not likely to be justified under the GDPR.

The Power of Class and Mass Action for Digital Rights

By Antoin O'Lachtnain, 27th July 2020

As big online companies “move fast and break things,” there are beneficiaries and victims. No one denies the benefits of digitalisation, but seemingly beneficial technology has in many respects turned into a free-for-all assault on users’ attention and privacy.

This new social space, in which we interact, and the workplace, in which we earn a living, are often turned into battlegrounds as a result. In the process, collateral damage is sustained by individuals’ data protection rights, their privacy and livelihoods. This damage occurs at both an individual and a societal level.

“Class actions” and “mass actions” are a way of bringing together large groups of people who have been hurt at an individual level, allowing them to band together to get redress.

The power of such actions lies in their scale. The damage in each individual case is very small: some location data here, some information about religious affiliation there. Each piece on its own is almost harmless. The harm comes when it is combined with data from other sources, and with data about billions of other users, and sold or traded for a purpose like advertising.

The harm is like one of a thousand cuts. Each one could be survived, but the combination has a real effect. Even then, the damage in each individual case is probably relatively small. But the overall impact is large, because platforms like Facebook, Google and Amazon collect and use personal data about billions of users.
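
By way of illustration, here is a minimal arithmetic sketch in Python of how individually small damages aggregate at scale. All the figures are invented assumptions for the sake of the example, not numbers drawn from any actual case:

    # Illustrative sketch only: every figure below is a hypothetical assumption.
    per_claimant_damages_eur = 500      # assumed compensation per person
    claimants_signed_up = 25_000        # assumed number of people who join the action

    total_exposure_eur = per_claimant_damages_eur * claimants_signed_up
    print(f"Exposure from this action alone: EUR {total_exposure_eur:,}")  # EUR 12,500,000

    # If the same breach affects millions of users who have not yet claimed,
    # follow-on cases could multiply that exposure many times over.
    potential_claimants = 5_000_000     # assumed size of the affected user base
    follow_on_exposure_eur = per_claimant_damages_eur * potential_claimants
    print(f"Potential follow-on exposure: EUR {follow_on_exposure_eur:,}")  # EUR 2,500,000,000

The specific numbers matter less than the multiplication: harms that are trivial per person become board-level liabilities at the scale of a platform’s user base.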

As with any legal field, there is a multitude of jargon and technicalities, but it might help to explain the most important distinction between a “class action” and a “mass action.” In a “class action,” the plaintiffs in court and their lawyers represent everybody in the same situation as they are in, regardless of whether they have formally joined the action and regardless of whether they even know about it. This type of action emerged in the United States in the 1960s and became an important mechanism for redressing breaches of consumer rights, civil rights and environmental standards.

Since then, Australia has adopted “class actions” and some countries in Europe have adopted limited forms of “class actions” which allow consumer groups to take actions on behalf of a class of consumers. 

A “mass action” is a more general, less legally specific term. In a “mass action,” large numbers of people are represented in a single complaint, with the same or very similar facts and laws applying to all their situations. They may or may not represent a class in the sense of a “class action.” Nonetheless, a single complaint with thousands of complainants pursuing a shared goal has the potential to set an important and impactful legal precedent. 

Hitting Companies Where it Hurts

Damages, namely a payment to the complainant by the responsible party as compensation, are a critical issue. Not all actions will involve damages, but the prospect of damages will make the cause a much bigger concern for the digital company involved. 

The obvious benefit of successful class or mass actions is that they provide justice to individuals involved, especially if damages are payable. But the benefit is potentially much bigger.

The prospect of class and mass actions is going to be a major deterrent for online companies, especially the largest and most profitable of them. These companies simply cannot afford to lose a series of cases like this, because it would inevitably result in a snowball of follow-on cases from victims in a similar situation. The company would have to make financial provision for losses and potential losses through damages awards, and this would adversely affect its stock price. Needless to say, even if no damages were payable in one case, follow-on claims for damages would be a very real prospect.

In the medium to long term, the effect of these actions will be that the digital companies will have to avoid such ill effects by controlling their own behaviour, by having regard to the rights of their users, and by improving their data protection and other human rights practices to the point where they are able to stand over them and win in court.

How to Harness Mass Action

So, what does it take to mount a mass or class action?

The cause of action is very much the starting point. There needs to be wrongdoing, and ideally damages need to be due as a result of that wrongdoing.

One example of such a wrong is breach of data protection rights. The GDPR provides specifically that damages are payable where data protection rights are breached. Equally if a group of people (for example, an ethnic group) suffered harm as a result of something that happened online and which could have been prevented, then there might well be a case that could be brought as a mass or class action.

The practicalities of a class or mass action are manifold. Firstly, there needs to be a clear case that is applicable to large numbers, preferably millions or tens of millions of people.

Secondly, the action needs to be publicised to get an adequate pool of claimants. It needs public awareness and complainants need to sign up to participate. Each of these individual clients needs to be dealt with and their particular case may need to be managed.

Thirdly, the action needs the right legal representation. These cases will break new ground and will likely be complex, and this will place great demands on the legal team.

Fourthly, the case needs to be funded because a case of this type will be expensive to undertake, due to the complexity of the subject matter and because the online company will fight tooth and nail to avoid the precedent a loss would set. 

Maybe it goes without saying that the case needs an appropriate legal form. The laws of the jurisdiction need to provide strong enough protections to give you a realistic prospect of winning the case, and they also need to provide a mechanism for you to operate your class or mass action.

By way of example, the GDPR provides for non-profit organisations to take cases on behalf of a group of affected individuals, but only in countries that have opted into this particular provision in their national law. The case has to be in the “right” jurisdiction, taking into account where the target company has a presence and where the complainants are located. The mechanism chosen has to provide adequate redress.

The judicial forum chosen also matters. Typically, data protection authorities do not have the power to award damages to wronged data subjects. The complainants must usually bring their complaint to the courts of the country to collect such damages.

There are many obstacles to class and mass actions and many pitfalls to be avoided. But class and mass actions have the potential to make a big practical difference in relation to digital rights.

Online companies move famously fast and their bad behaviour affects billions of people in small but insidious ways. Class and mass actions can be a way for the legal system to provide a retrospective judgment on these companies’ actions, and orders which result in payment of money as damages give these judgments strong “teeth.” Companies cannot take these judgments lightly, because they can no longer just think about what they can get away with in the short term.

Class and mass actions are a way to send a message to digital companies (and their investors) that they will ultimately need to face up to and pay for the negative consequences of their profit-making activities.

Antoin O Lachtnain is a Director of Digital Rights Ireland, and is Managing Director of ex muris, a firm providing data services to the energy industries.

Image by Chuttersnap on Unsplash

Tackling the Impact Measurement Challenge

By Patrick Regan, 22nd July 2020

In early June 2020, a number of DFF grantees came together virtually for DFF’s first ever “outcomes harvesting” workshop.

This marked the next step in DFF’s plans to pilot a new framework for monitoring the impact of strategic litigation to advance digital rights. But what is “outcomes harvesting”? What is this new framework? Why is it important?

Measuring the impact of strategic litigation is no easy feat.

Litigation can take years, the context is often complex, and there are many factors outside our control which could influence the effectiveness of the litigation. Often, we have only limited (if any) access to key decision makers involved to gain insight into how law, policy or practice has actually changed.

This makes it challenging not only to identify impact, but also to assess your own contribution when impact does occur. This is compounded by the lack of readily available evaluation tools and methodologies tailored to strategic litigation.

Many NGOs and litigators do not have the time or resources to reflect, evaluate and collect robust evidence to learn from their advocacy activities. However, measuring and demonstrating the impact of litigation is imperative for the digital rights field to work more efficiently and effectively, and attract support for this type of work. So how can we overcome these challenges?

I would be embarrassed to admit the number of times I have lain awake pondering different ways to solve this problem. Thankfully, my pondering was not in vain. In early 2019, I found out that DFF had noticed the same issues around measuring the impact of strategic litigation, and decided to try and do something about it. This served as a jumping-off point to develop an exciting (if you are an evaluation nerd like me) new framework with DFF to help measure the impact of strategic litigation in digital rights.

Developing a Pilot Framework

DFF wanted to develop something that was specific to strategic litigation in the digital rights field, but which could be easily implemented and adapted for different organisations. It was imperative to strike the right balance between something that had enough rigour to be reliable, but also light enough that it would not demand too many (already scarce) resources to use.

Building on my own previous experiences of adapting tools for strategic litigation impact assessment, reviewing available literature and case studies, and talking to other organisations in the field to check relevance and draw inspiration, we developed a framework based on three key components:

  • A thematic framework of outcome themes and impact types to help guide the process of monitoring, identifying and analysing outcomes.
  • A methodology for capturing “outcome statements” (which takes inspiration from a methodology called “outcomes harvesting”) – see the illustrative sketch after this list.
  • A set of evidence principles to add a layer of rigour and quality control.
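
By way of illustration only – this post does not spell out the framework’s actual fields, so every name below is a hypothetical assumption rather than the published schema – an “outcome statement” could be captured as a simple structured record along these lines:

    # Hypothetical sketch of an "outcome statement" record; field names are
    # illustrative assumptions, not the published framework's actual schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class OutcomeStatement:
        description: str                # what observable change occurred
        contribution: str               # how the litigation plausibly contributed
        theme: str                      # outcome theme from the thematic framework
        impact_type: str                # impact type from the thematic framework
        evidence: List[str] = field(default_factory=list)  # sources supporting the claim

    # Example of a fictional harvested outcome:
    outcome = OutcomeStatement(
        description="National media began framing the issue as a children's rights question",
        contribution="Press briefings given by the legal team around the hearing dates",
        theme="Changing the narrative",
        impact_type="Media and public awareness",
        evidence=["Press clippings, June 2020", "Journalist interview notes"],
    )

Recording outcomes in a structured form like this also makes the evidence principles easier to apply, since every claimed change travels with the sources that support it.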

The framework was published in December 2019 under a Creative Commons licence so it can be easily shared and adapted.

Putting the Framework into Action

On 10 and 15 June 2020, eight DFF grantees came together for a workshop to learn about the framework, marking the next step in DFF’s plans to pilot the framework and put it into practice.

During the first part of the workshop, we discussed some common challenges in measuring impact in strategic litigation and digital rights. This was followed by a short introductory training on the outcome harvesting methodology. 

The group raised some important challenges and questions. How can you measure and understand how different groups and communities are impacted by the litigation? What impact does the litigation have when the state in question does not implement a judgment? How can you attribute your work to higher level impacts when so many others are involved? How can we communicate the impact of strategic litigation to a wider audience when both the process and the issues litigated are complex and technical?

The framework seeks to respond to some of these challenges. It provides a tool to help document and identify key outcomes and changes, and helps to identify your contribution to these changes. It also gives a structured way to capture learnings and unexpected outcomes, and to seek out negative outcomes so we can better mitigate them in the future.

It’s Not All About the Judgment

The direct legal outcomes of strategic litigation are often significant and important. For example, obtaining a positive judgment that establishes precedent and new case law, or redress for the claimants.

However, the framework we developed also encourages you to think about the other ways in which litigation may be generating positive results and changes, at all stages of the litigation process. This could be by increasing public awareness or changing public perception on an issue; changing the way a certain topic is presented in the media; or even the impact that the act of taking the case might have on your own organisation, network or the wider field.

“Sometimes we can focus too much on the bigger, longer term aims,” commented one participant. This was an important point of discussion during the workshop. Another workshop participant commented on the approach that “it’s a good way to start thinking outside of the box in terms of impacts, to think about all of the things and areas your litigation might affect”.

The framework encourages anyone using it to consider these different layers of outcomes and to appreciate their value and contribution to other, broader outcomes and impacts.

Evidence Standards

During the workshop we also discussed data and evidence quality, considering what was both realistic to collect and credible enough to be used in measuring outcomes. The group discussed some key questions to help think through evidence quality, such as:

  • Is the outcome you identified, and your contribution towards it, proportional to your evidence? (i.e. is your contribution claim realistic based on the evidence you have?)
  • Is the contribution you are claiming proportional to the size, scale and timing of your litigation/advocacy?
  • Has the evidence been peer reviewed or independently verified by anyone else?
  • Are the voices of those we claim to have had an impact on represented in the evidence?

The pilot framework has been designed so that organisations can use the data they might already capture, their existing monitoring or evaluation systems, and also anecdotal information that they have access to. It proposes a set of evidence principles to help assess the reliability of the evidence and outcomes you have identified.

Putting the Framework to the Test

After the first workshop, the group then put the methodology into practice by identifying and preparing a number of outcomes concerning their litigation projects. During the second part of the workshop, the group shared some of their outcomes as well as their reflections on using the methodology. We also shared and reflected on some of the challenges, successes and learnings when engaging in digital rights litigation.

The group “harvested” over 30 different outcomes which ranged from establishing important and novel legal precedents and influencing the way in which certain topics are portrayed in the media, to more organisational and field-related outcomes, such as prompting others to engage in legal action or developing and applying knowledge gained through their pre-litigation research to improve the quality and reach of other advocacy campaigns. 

Some organisations who were at an earlier stage of their litigation used the method to help identify and articulate their desired outcomes. They thought about what “impact” would actually look like in reality and what changes they might be able to observe. This helped them to plan what kind of evidence or data they would need to collect to know if this change has happened.

The framework and methodology are, of course, only one potential solution. They make use of a number of different research and evaluation principles that can be used and adapted flexibly in a variety of situations and contexts. As one participant commented, “it seems like a useful way to understand and develop an idea of what your role is in a bigger change, and what part the litigation played”.

DFF will continue to pilot the framework through its grantmaking and by holding more outcomes workshops. If you would like to know more about the framework, or attend a future DFF outcomes workshop, please email DFF’s Programme Officer.

Patrick Regan is an independent evaluation consultant specialising in the evaluation of projects concerning human rights and which use litigation/the law to drive social change. His consultancy, the Rights Evaluation Studio, provides a range of evaluation and project design services.