Supporting Long-Term Impact: Announcing Changes to DFF’s Grantmaking

By Nani Jansen Reventlow, 29th November 2020

DFF is coming to the end of its first three-year strategy cycle, which we have informally referred to as the “pilot phase”. The end of our pilot phase also means the closure of our current grantmaking process.

DFF will not accept any further grant applications in 2020 as we are working to finalise a new, revised process, which we are looking forward to launching in early 2021.

DFF has changed significantly since we were first introduced to the world in October 2017. Based on input from the digital rights field, we established a strategy and priorities for our funding, launched a grantmaking process, coordinated annual strategy meetings, facilitated strategic litigation retreats, and held field-building workshops on topics like the GDPR, algorithm use, and competition law.

We were very happy that a recent external evaluation concluded these activities are seen as adding important value to the digital rights field.

While we were able to take important first steps in supporting the digital rights field during the pilot phase, there are areas where we can do more to make sure we are best serving its needs. One area of particular importance is our grantmaking.

The current grantmaking process was launched in July 2018, after being developed in dialogue with the digital rights field. By the end of 2020, DFF will have approved more than 40 grants, worth a total of over €1.5 million, supporting the litigation and pre-litigation research projects of 30 different organisations and individuals across Europe. Many of these projects are detailed on our case study page and in our annual report, with more to be published over the coming months.

Building on the participatory approach taken in developing the initial grantmaking process, DFF has continued to revise its grantmaking throughout the pilot phase. We actively sought feedback through regular surveys, outreach, and conversations with applicants and grantees. In response to questions about the scope of our grants or what we expected to see in applications, we published guides and a frequently asked questions page to help organisations prepare their applications.

When we received feedback that the application process took too long, we developed a “fast-track” application process for grantees moving from one grant to another. We have also sought to add value in other ways: for example, in 2019, we developed a new framework to better capture the outcomes and impacts of strategic litigation.

We are proud of the many great projects we have been able to support over the last three years, and hope that the efforts we made to improve our processes during that time managed to address at least a good part of the needs of our grantees. But: there are some major limitations we can only overcome by changing the scope of the grantmaking process itself.

For example, many organisations have requested support for adverse costs (costs to cover possible court orders to pay the fees of the opposing party in the case of a loss), noting that these costs are a major barrier to pursuing public interest litigation. While recognising that this would be a welcome type of support, we also wanted to do justice to the complexity of the issue. How do you help litigators absorb the negative impact an adverse cost order might have on their operations, without discouraging courts from taking the public interest into account and without, through funding support, incentivising frivolous litigation, the very thing cost orders are supposed to discourage?

We also wanted to make sure we adopted an approach that would be equitable across the whole geographical region DFF serves, not just certain jurisdictions. Following detailed research and consultations, from 2021 applicants will be able to include adverse costs in their grant applications under certain conditions. Our new policy seeks to balance helping to mitigate the impact of cost orders on digital rights litigators with making sure DFF doesn’t unintentionally encourage the practice itself.

The biggest limitation of DFF grantmaking during the pilot phase was our inability to support long-term projects across multiple instances of litigation. DFF was established to provide grants supporting strategic litigation, i.e. litigation that has an impact beyond the parties involved in the case and that leads to legal, policy or social change. However, this kind of impact often takes more than one instance of litigation to achieve. While this type of support was envisaged from the very early stages of DFF’s development, as a young organisation we were not able to implement it from the start.

We are working to change this in 2021, when we hope to launch a new grantmaking process allowing organisations to apply for grants that support multiple instances of litigation. We are in final discussions to ensure the process carefully balances the demands of the field with operational constraints, and secures the support of our funders and the DFF Board.

The new process will also build on the lessons learned from the COVID-19 Litigation Fund, which was our first time providing grants of this nature. We hope the new process will allow applicants to more effectively plan their cases over a long period, with the confidence that they will be able to see a case through to the highest level necessary, and also provide an incentive to invest in building long-term strategies, coalitions and campaigns with other partners.

The end of the current grantmaking process will not affect current grantees, whose projects will carry on until they are completed. DFF will continue to move the small number of remaining active applications through our current process, with final decisions to be made before the end of 2020.

Look out for more details from us later in the year and please get in touch if you have questions or comments in the meantime –– as always, we welcome your feedback and input. We look forward to supporting more strategic litigation efforts to advance digital rights across Europe in 2021 and beyond.  

Decolonising Digital Rights: Why It Matters and Where Do We Start?

By Nani Jansen Reventlow, 23rd October 2020

This speech was given by DFF director Nani Jansen Reventlow on 9 October as the keynote for the 2020 Anthropology + Technology Conference.

The power structures underlying centuries of exploitation of one group by another are still here.

Besides the fact that we, in reality, still have over 60 colonised territories around the world today, maintained by 8 countries (though the UN General Assembly would disagree with that number), colonisation has taken on many different forms, including in and through technology.

What does this mean for our societies? What would things look like if they were different? How do we get there –– or: how do you decolonise society? How do you decolonise technology? And how do you decolonise digital rights?

I will start this talk with a spoiler: I will not be able to provide you with an answer to these fundamental questions. What I will try to do in the next half hour is tell you something about the problems we at the Digital Freedom Fund are seeing in Europe when it comes to digital rights and what is often euphemistically referred to as “diversity and inclusion”.

I will also tell you about what we are doing to try to set in motion a process to fundamentally change the power structures in the field that works on protecting our human rights in the digital context.

But before we get there, we first need to take a look at what the problem is and why we have it.

Sketchnote by Gunjan Singh

What’s the problem?

Today’s conference is centred around “championing socially responsible AI,” but: what does this mean? From a human rights perspective, many AI-related digital rights conversations tend to focus on the right to privacy and data protection. In doing so, these conversations often miss the full extent of the social impact new technologies can have on human rights. This is one of the reasons why it is so important that we decolonise the digital rights space and encourage an intersectional approach to AI and human rights issues.

Let me illustrate some of the issues in the context of the themes of this conference, starting with:

Health tech

The implications of health tech for individuals are a prominent topic of conversation at the moment, as we face the COVID-19 pandemic.

Big Data solutionism is pervading coronavirus responses across the globe, with contact tracing, symptom checking, and immunity passport apps being rolled out at rapid speed.

These technologies are often not properly tried and tested, and it is clear that privacy and data protection often have not been front and centre for those developing the technology, let alone other human rights considerations. They also illustrate the degree to which our technology, and the way we deploy it, is colonised.

The UK, for example, launched a “Test and Trace” system in England and Wales in May this year without, it later admitted, having properly conducted a data protection impact assessment. This admission came after the NGO Open Rights Group had threatened legal action. The heavily criticised contact tracing app was abandoned and, since late September, a new one has been available, which addresses some of the privacy concerns previously raised.

But: is privacy the only issue we should be looking at when considering the viability of using tracing apps to combat a public health emergency? Limiting our analysis to privacy and data protection alone results in blind spots on many of the broader issues at play, such as discrimination and access to healthcare. A few examples. 

To be able to download and use the app, you need to have a relatively new phone, with the right operating system installed. This means that those who are unable to afford or don’t have direct access to technology are excluded.

There is also an assumption that each user is uniquely linked to a phone. And of course, in order to download the app and receive warnings and notifications, you have to be online.

To put it simply: the effectiveness of the app is based on an assumption that the “average person” in society is the exclusive owner of a new smartphone with reliable access to the internet. Researchers at Oxford University have estimated that more than half the population of a country would need to make use of a tracing app in order for it to be effective.

This raises the question: what happens to the rest of the population, the “less than half” that does not or cannot make use of the app, and how does the automation of disease control affect their vulnerability? Should we want to use an app at all if it is not effective for the protection of everyone in our society?

Concerns around health data, however, are not new to the COVID-19 context. The UN Special Rapporteur on privacy has recognised that medical data is of “high value” for purposes such as social security, labour, and business. This means that stakeholders such as insurance companies and employers have a considerable interest in health-related data.

Many health services are built on the values of trust and confidentiality. But as more and more actors move into the health data space, these values are increasingly fading out of focus.

A failure to protect health data can engage the rights to life, social protection, healthcare, work, and non-discrimination. Very concretely, it may deter individuals from seeking diagnosis or treatment, which in turn undermines efforts to prevent the spread of, say, a pandemic.

The access Home Office immigration officials are given to check entitlement to health services as part of the UK government’s “hostile environment” policy is a clear example of this, but even in “lower risk” settings, a person might think twice about getting medical help if the potential repercussions of sensitive health information ending up in unwanted hands are sufficiently grave.

FinTech

It is said that health cannot be bought. With that wisdom in mind, let us turn to the second conference theme: FinTech.

Here too, with the ever-increasing automation of financial services, it is not only the right to privacy that is under threat.

Access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being. Access to credit helps marginalised communities exercise their economic, social, and cultural rights.

Muhammad Yunus, social entrepreneur and Nobel Peace Prize winner, has gone so far as to say that access to credit should be a human right in and of itself: “A homeless person should have the same right as a rich person to go to a bank and ask for a loan depending on what case he presents.”

In reality, automation in the financial sector often polices, discriminates, and excludes, thereby threatening the rights to non-discrimination, association, assembly, and expression; individuals may not want to associate with certain groups or express themselves in certain ways for fear of how it will impact their creditworthiness.

This policing, discrimination, and exclusion can also have an impact on the right to work, to an adequate standard of living, and the right to education.

Cathy O’Neil has noted that creditworthiness has become an “all-too-easy” stand-in for other virtues. It is not just used as a proxy for responsibility and “smart decisions”; it is also a proxy for wealth. Wealth, in turn, is highly correlated with race.

While the FinTech narrative is that it works with “unbiased” scoring algorithms that are blind to characteristics such as gender, class, and ethnicity, research shows a different picture.

Many of these “modern” algorithms make their decisions based on historic data and decision patterns. This has led to the coining of the term “weblining”, describing how existing discriminatory practices, operationalised in the US in the 1930s through “redlining” to keep African American families from moving into white neighbourhoods, are now replicated in new technology.

Those who can afford to can hire consultants or move to certain neighbourhoods to boost their credit scores. In the meantime, those living in poverty are refused loans, often on a discriminatory basis, and are even targeted, because of their credit scores, with payday loans and other online advertisements that can plunge them further into poverty.

This illustrates that this is more than a question of privacy: it is a question of livelihood.

It doesn’t stop there. Credit scores are even used to make decisions about a person outside of the financial services sector, such as hiring or promoting individuals at work. Conversely, other proxies are increasingly being used as stand-ins for creditworthiness. O’Neil has explained how this has contributed to a “dangerous poverty cycle”.

With datasets recording nearly every aspect of our lives, and these data points being relied on to make decisions about us as employees, consumers, and clients, we are being labelled as targets or dispensables. Our occupations, preoccupations, salaries, property values, and purchase histories result in us being labelled as “lazy,” “worthless,” “unreliable”, or a risk, and that label can carry across many different aspects of our lives.

Smart cities

I will be brief about smart cities, because I actually don’t think the conversation about making cities “smarter” through technology is worth having unless we are talking about the need to completely reinvent the urban design that has historically and systemically harmed people of colour, people living in poverty, people with disabilities, and marginalised groups.

Smart city design runs the risk of automating and embedding assumptions on how we run and manage cities that are less about people, and more about profit and exclusion, as well as vague, unsubstantiated notions of “efficiency”.

Smart city initiatives in India have led to mass forced evictions, and in the United States the question should be asked whether smart city design to “reduce neighbourhood crime” isn’t a euphemism for increased surveillance, with all the racialised policing that comes with it.

The really interesting question here is how we can use AI to help fix these inequalities and harms, but that is not usually the focus of many of the current smart city debates.

Why do we have this problem?

We just looked at some of the manifestations of the problem of colonised technology. So: what causes us to have this problem in the first place?

First, there is a common and incorrect assumption that technology is neutral. However, the apps, algorithms, and services we design ingrain the choices made by their creators. They replicate their creators’ preferences, their perceptions of what the “average user” is like, and what this average user would want or should want to do with the technology. These design choices are based on the designers’ world view and therefore also mirror it.

As someone said to me the other day: “an algorithm is just an opinion in code.” When those designers are predominantly male, privileged, able-bodied, cis-gender, and white, and their views and opinions are being encoded, this poses serious problems for the rest of us.

This relates to the second cause, which is that Silicon Valley has a notorious brogrammer problem. This is easily visible in any graph reflecting the makeup of Silicon Valley, where most of our technology here in Europe comes from (a problem in and of itself, as technology developed from a white, Western perspective is deployed around the world).

For professions such as analysts, designers, and engineers, the numbers for Asian, Latina, and Black women decrease as role seniority increases, often to the point where they literally become invisible on the graph because their numbers are so small.

Analysis of 177 Silicon Valley companies by investigative journalism website Reveal showed that ten large technology companies in Silicon Valley did not employ a single black woman in 2018, three had no black employees at all, and six did not have a single female executive.

This should make it less surprising that, for example, facial recognition software built by these companies is predominantly good at recognising white, male faces.

However, a predominantly male, white, and able-bodied workforce is not the only thing that factors into technological discrimination.

A third cause, which we already touched upon in the context of FinTech, is that technology is built and trained on data that can already reflect systemic bias or discrimination. If you then use those data to develop and train new software, it is not surprising that the software will be geared towards replicating those historical patterns. Technology based on data from a racist, sexist, classist, and ableist system will produce outcomes that reflect that racism, sexism, classism, and ableism. Unless a conscious effort is made to get the system to make different choices, systems built on such data will replicate the historical preferences they have been fed.

Finally, we don’t work consistently with interdisciplinary design teams. Engineers will build technology to certain specifications, and those will have systemic biases baked into them; you can have all the non-male, non-white, non-ableist engineers you can think of, but it will not be up to them (alone) to solve the systemic problems.

If society is sexist, racist, and ableist, so too will be the AI it develops. It is unhelpful to focus on the technology alone, as doing so ignores the political and societal systems in which the technology is developed and operates.

An interdisciplinary approach in developing AI systems is therefore crucial. There is a role for social scientists and others at crucial stages of the design and decision-making process; developing suitable AI is not just a task for engineers and programmers.

What can we do about it?

We obviously have a large-scale problem on our hands and the monster we created won’t easily be put back in its cage. There are, however, a number of things we can do, both in the shorter and longer term.

  • Push for moratoriums on new technologies until we understand their social impact, particularly on human rights. The call for a ban on the use of facial recognition technology has gained more traction following the Black Lives Matter protests, with several tech giants putting at least part of their products in this area on hold. This should be a guiding principle: unless we know and understand in full what the human rights impact of new technology is, it should not be developed or used.

  • We also need to have the debate on where to draw the so-called red lines: the areas where AI technology should not be used at all. This conversation needs to be people-centred; the individuals and communities whose rights are most likely to be violated by AI are those whose perspectives are most needed to make sure the red lines around AI are drawn in the right places.

  • When we analyse the impact of new technologies, this needs to be done from an intersectional perspective, not by looking only at how they affect privacy and data protection. As we saw earlier, breaches of privacy are often at the root of other human rights violations. Companies and governments need to be held accountable for these violations, and they need to work with multidisciplinary teams that include social scientists, academics, activists, campaigners, technologists, and others to prevent violations from occurring in the first place. This work also needs to be done closely and consistently with affected groups and individuals to understand the full extent of the impact of AI-driven technologies.

  • We need to push for enforcement of, and compliance with, existing legislation protecting human rights. There is often a call for new regulation, but this seems to conveniently forget that we actually have existing international and national frameworks that set clear standards on how our human rights should be respected, protected, and fulfilled. This is also a healthy antidote to the fuzzy “ethics” debate companies would like us to have instead of focusing on how their practices can be made to adhere to human rights standards.

  • Finally, we need to not only decolonise the tech industry: we also need to decolonise the digital rights field. The individuals and institutions working to protect our human rights in the digital context clearly do not reflect the composition of our societies. This leaves us with a watchdog that has too many blind spots to properly serve its function for all the communities it is supposed to look out for.

At the outset, I referred to “diversity and inclusion” as a euphemism. I did that because diversity and inclusion alone are not enough to make the change that needs to happen.

Instead of focusing on token representation, which essentially treats the current status of the field as a pipeline problem, we need to change the field on a structural level; we need to change its systems and its power structures.

This is something that is fundamentally different from “including” those with disabilities, people from racialised groups, the LGBTQI+ community, and other marginalised groups in the existing, flawed ecosystem.

Here we return to the big question raised at the beginning of this talk: how do you decolonise a field? The task of re-imagining and rebuilding the digital rights field is clearly enormous. Especially since digital rights cover the full scope of human rights and therefore permeate all aspects of society, the field does not exist in isolation.

We therefore cannot solve any of these issues in isolation either: there are many moving parts, many of which will be beyond our reach to tackle, even though we at the Digital Freedom Fund are working with a wonderful partner in this effort, European Digital Rights (also known as EDRi).

But: we need to start somewhere, and we need to get the process started with urgency as technological developments will continue at a rapid pace and we need a proper watchdog to fight for our rights in the process.

So: what have we done so far?

We started earlier this year with a process of listening and learning. Over the past months, we have had conversations with over 30 individuals and organisations that we are currently not seeing in the room for conversations about digital rights in Europe.

We asked about their experience of working on digital rights, their experience of working with digital rights organisations, and what a decolonised digital rights field might look like and what it might achieve.

We also started collecting and reading literature about decolonising technology and other fields, to start developing a possible joint vision we can work towards.

As these conversations continue, we are starting similar conversations with the digital rights field to learn what their experience has been working on racial and social justice issues, working with partners in this field, and also what their vision of a decolonised digital rights field might look like.

The next step is an online meeting in December of this year to (a) connect different stakeholders and (b) receive input on what the next step in the process, which we are referring to as a design phase, should look like. This design phase, which we hope to start next year, should be a multi-stakeholder effort to come to a proposal for a multi-year, robust programme to initiate a structural decolonising process for the field.

Much has changed since we first started talking about the need to decolonise the digital rights sector two years ago.

The recent international Black Lives Matter protests have done a lot to boost awareness about systemic racism. On the one hand this is encouraging; on the other, the threat looms large of “decoloniality” becoming yet another buzzword that people like to use but not put into practice.

The irony is also not lost on me that our work is now of interest to many (the media, policymakers, funders…) because it has been validated by a “white gaze” that became captivated by racial justice protests amidst the boredom of a global lockdown.

That being said, the current mood does illustrate how necessary this work is, not only in the digital rights space, but everywhere in our society. And the more of these processes we can set in motion, the better the world we will be creating for all of us.

Protecting Children’s Digital Rights in Schools

By Nani Jansen Reventlow, 30th July 2020

This post was written in association with The State of Data 2020, an event organised by defenddigitalme, who advocate for children’s privacy in data and digital rights. 

In recent years, cases have been decided across Europe that raise serious issues about the digital rights of children in schools.

Unsurprisingly, schools are not immune to the increased digitisation sweeping through our societies. Around Europe, many schools have already begun to introduce tools that collect and process students’ biometrics, such as their faces and fingerprints, as well as other personal, sensitive data.

It sounds worrying – and it is. Luckily, however, children’s personal data has special protection under the GDPR.

Processing children’s personal data in the education context isn’t entirely off limits under this regulation. It’s acceptable for schools to use such data if there is deemed to be a good reason for it: for example, if it’s in the interest of the child or the public. This could include monitoring immunisations, keeping attendance, or developing online learning platforms that further the education of the child.

But there are, of course, caveats. For one, the GDPR sets a higher standard of justification when it comes to processing people’s unique biometric data, given that this is an extremely sensitive and invasive form of data collection. Also, if schools don’t ensure their students’ data are completely secure, or if they collect more data than they actually need, they can end up breaching the GDPR.

As we’ll see below, real life examples of such breaches have already begun to surface.

Taking Attendance through Facial Recognition

First we’ll go to Sweden, where the Secondary Education Board in the municipality of Skellefteå began using facial recognition in 2018 to monitor attendance at a secondary school.

In this case, 22 students were monitored by cameras as they entered the classroom every day. The purpose of this experiment was to see if automation could save time in registering students’ attendance. The school hoped that the time saved could then be used for teaching. The students’ facial images, as well as their first and last names, were processed and stored, with consent from their guardians.

Similarly, in France in 2019, a regional authority launched an experimental security project in two high schools –– one in Marseille and one in Nice –– installing facial recognition gates at the entrance of the schools. The gates identified students using facial images, and also scanned for unidentified visitors. The students whose data would be processed had given their consent.

The outcome? Both the Swedish and French Data Protection Authorities found that using facial recognition technology to monitor students breached the GDPR, specifically Article 5 and Article 9.

Under Article 5, personal data processing must be “adequate, relevant, and limited” to what is necessary to carry out a specific purpose.

Article 9, on the other hand, makes clear that processing biometric data is permissible only under strict conditions. One of those conditions is “explicit consent”: something that both schools used to justify their experiments. However, due to the imbalance of power between the data subjects (the students) and the data controllers (the schools), both data protection authorities found that consent had not been given freely.

As well as this, both authorities found that facial recognition technology infringed upon the personal integrity and data protection rights of students to a degree that was disproportionate to the purposes being pursued.

The Swedish data protection authority actually went further, and found that, given how little we still know about the potential risks and consequences of deploying facial recognition, the school should have consulted with the authority for advice before using it.

Fingerprint Scanning for Student Meals

Another key case occurred in Poland, and concerned not facial recognition, but fingerprint collection.

Starting in 2015, Primary School No. 2 in Gdansk processed the fingerprints of 680 children as a method of verifying meal payment at the school canteen.

Once again, the school had obtained consent from the guardians of students whose biometric data was being processed. Four children opted out, and paid using an alternative method. However, these students had to wait at the back of the lunch queue until all the students using biometric data verification had passed through the canteen.

Just as the Swedish and French schools had done, the Polish school argued that it had obtained “explicit consent.” But, once again, the Polish data protection authority rejected this point. This time, the authority found that, because students who didn’t consent had to wait at the back of the line, the school was treating students who refused to consent unequally, thereby placing pressure on students to consent.

As well as this, the authority found, once again, that biometric data collection wasn’t necessary in this case, given that there were alternative, less invasive methods for students to verify their meal payments.

Insecure Apps for Parents, Students and Teachers

Moving on to Norway, the Education Agency of the Municipality of Oslo launched an app in 2019 that allowed parents and students to send messages to school staff.

Soon after the app launched, Aftenposten, a widely read Norwegian news outlet, broke the news that the app contained a major security vulnerability. Apparently anyone who could log in to the portal could potentially gain access to every communication that had been sent through the app – as well as the personal information of 63,000 school children in Oslo.

The Norwegian data protection authority found that the Education Agency had breached the GDPR – Article 5.1(f) – by launching an app without properly testing it to ensure the information on it would be stored securely.

The authority also found that the Education Agency breached Article 32 of the GDPR, which requires data controllers to ensure a level of security appropriate to the risk. In light of the special vulnerability of children, the Education Agency did not take sufficient measures to ensure the confidentiality and integrity of its messaging system.

As a result of the findings, the data protection authority imposed a fine on the Education Agency. It also imposed a similar fine on the Municipality of Bergen, where data security insufficiencies exposed the personal data of students by allowing unauthorised users to access the school’s administrative system, as well as the digital learning platform where students’ classwork, evaluations, and personal data were stored.

Recurring Patterns

In considering whether a school violated the data protection rights of students, the data protection authorities of Sweden, France, Poland, and Norway all drew attention to the relationship between a school and its students. Schools are in a position of authority when they monitor students’ data, and students are in a position of dependency. The power imbalance in this relationship will most often invalidate consent as a lawful condition for biometric data processing.

As well as this, the particularly sensitive and vulnerable nature of children’s data processed in a school context (often relating to issues such as health, identity and development) requires schools to protect students’ personal data with particularly secure technical measures.

Educational authorities carry out many tasks that require collecting personal data from students. Technologies, including biometric technologies like facial recognition and fingerprinting, present new opportunities for schools to expedite processes like attendance monitoring, school security, and lunch payment.

But the recent GDPR fines imposed on schools for implementing such programmes underscore that schools cannot sacrifice the privacy of their students for expediency.

Whatever measures schools adopt, they must safeguard children’s rights and be necessary and proportionate to the purpose pursued. If there are alternative measures that do not require the processing of sensitive personal data, and those measures can achieve the same results, then the processing of sensitive personal data is not likely to be justified under the GDPR.