The Facebook Ruling that Set a Competition Law Precedent

By Maria Smith, 30th July 2020

Social media icons with 'Facebook' icon in focus

This article was co-authored by Maria Smith and Tatum Millet.

Late last month, Germany’s highest court ruled that Facebook had abused its market dominance to illegally harvest data about its users. The ruling upholds an earlier decision by the country’s antitrust watchdog, the Bundeskartellamt.

The case presents an example of the role competition law can play in holding corporate actors accountable for business practices that violate digital rights.

Facebook’s terms and conditions are written to allow the company to collect almost unlimited amounts of user data from Facebook-owned, as well as third party, websites. German regulators successfully used a novel antitrust argument to show that Facebook had pressured users into making an all-or-nothing choice, forcing them to either submit to unlimited data collection or not use the site at all.

The court determined that Facebook occupies a dominant position within the social media market and, for many users, giving up Facebook means giving up their online connections.

By taking advantage of its market dominance to push users into consenting to invasive data collection and combining policies, Facebook violated competition laws meant to protect consumers from exploitative abuse. The court’s interpretation of personal data collection as a form of “payment” for using Facebook is an important development in reframing competition law concepts to reflect the realities of digital markets.

Andreas Mundt, Germany’s top antitrust enforcer, applauded the court’s decision. “Data are an essential factor for economic strength, and a decisive criterion in assessing online market power,” Mr Mundt said. “Whenever data are collected and used in an unlawful way, it must be possible to intervene under antitrust law to avoid an abuse of market power.” 

Facebook must now alter its practices in Germany by allowing users to block the company from combining their data on Facebook with data about their activities on other websites and apps. Facebook’s response to the ruling? The company said that it “will continue to defend [its] position that there is no antitrust abuse.”

The practice struck down by German authorities –– combining users’ data from across millions of websites and apps –– is the very practice that has allowed Facebook to balloon into the advertising giant it is today. This case demonstrates how Facebook wielded its dominance in the social media market to deprive users of the ability to meaningfully consent to personal data processing.

The company’s unique access to the personal data of billions of users has allowed it to secure a stranglehold on the market for targeted online advertising. As the ruling shows, this position has allowed Facebook to exert great power over the digital economy and further stifle competition.

In June, DFF convened a Competition Law Workshop with digital rights organisations from across Europe and the US to explore how anti-competitive practices could be challenged to defend digital rights. Participants identified instances in which potentially anti-competitive practices are playing out in the digital context, charting possible legal challenges to, among other issues, intermediaries abusing their market dominance.

The group also identified ways to strengthen the regulatory capacity of European bodies. In early June, the European Commission launched two public consultations to seek views on the Digital Services Act package and on a New Competition Tool.

After the DFF workshop, a group of participants drafted a response to this consultation, urging the Commission to keep digital rights in focus when analysing the impact of the proposed regulatory tools. These participants note that “large online platforms not only act as economic gatekeepers, but also as ‘fundamental rights’ gatekeepers.”

At a time when personal, social, and political life increasingly plays out across online platforms, it is urgent that we find ways to ensure that regulators have the legal and political mechanisms needed to protect privacy, competition, and human rights. 

Germany has set a pro-competition, pro-consumer precedent. As big tech’s “bully” tactics come under scrutiny, momentum is building behind competition law as regulators look for ways to rein in monopolistic practices.

Maria Smith is a 2L at Harvard Law School and a 2020 summer intern at the Digital Freedom Fund.

Tatum Millet is a 2L at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Image by Nordwood Themes on Unsplash

Protecting Children’s Digital Rights in Schools

By Nani Jansen Reventlow, 30th July 2020

Children sitting in a classroom

This post was written in association with The State of Data 2020, an event organised by defenddigitalme, who advocate for children’s privacy in data and digital rights. 

In recent years, cases have been decided across Europe that raise serious issues about the digital rights of children in schools.

Unsurprisingly, schools are not immune to the increased digitisation sweeping through our societies. Around Europe, many schools have already begun to introduce tools that collect and process students’ biometrics, such as their faces and fingerprints, as well as other personal, sensitive data.

It sounds worrying – and it is. Luckily, however, children’s personal data has special protection under the GDPR.

Processing children’s personal data in the education context isn’t entirely off limits under this regulation. It’s acceptable for schools to use such data if there is deemed to be a good reason for it: for example, if it’s in the interest of the child or the public. This could include monitoring immunisations, keeping attendance, or developing online learning platforms that further the education of the child.

But there are, of course, caveats. For one, the GDPR sets higher standards of justification when it comes to processing people’s unique biometric data, given that this is an extremely sensitive and invasive form of data collection. Also, if schools don’t ensure their students’ data are completely secure, or if they collect more data than they actually need, they can end up breaching the GDPR.

As we’ll see below, real life examples of such breaches have already begun to surface.

Taking Attendance through Facial Recognition

First we’ll go to Sweden, where the Secondary Education Board in the municipality of Skellefteå began using facial recognition in 2018 to monitor attendance at a secondary school.

In this case, 22 students were monitored by cameras as they entered the classroom every day. The purpose of this experiment was to see if automation could save time in registering students’ attendance. The school hoped that the time saved could then be used for teaching. The students’ facial images, as well as their first and last names, were processed and stored, with consent from their guardians.

Similarly, in France in 2019, a regional authority launched an experimental security project in two high schools –– one in Marseille and one in Nice –– installing facial recognition gates at the entrance of the schools. The gates identified students using facial images, and also scanned for unidentified visitors. The students whose data would be processed had given their consent.

The outcome? Both the Swedish and French Data Protection Authorities found that using facial recognition technology to monitor students breached the GDPR, specifically Article 5 and Article 9.

Under Article 5, personal data processing must be “adequate, relevant, and limited” to what is necessary to carry out a specific purpose.

Article 9, on the other hand, makes clear that processing biometric data is permissible only under strict conditions. One of those conditions is “explicit consent” – something that both schools used to justify their experiments. However, due to the imbalance of power between the data subject –– the students –– and the data controller –– the school –– both data protection authorities rejected the argument that consent had been given freely.

As well as this, both authorities found that facial recognition technology infringed upon the personal integrity and data protection rights of students to a degree that was disproportionate to the purposes being pursued.

The Swedish data protection authority actually went further, and found that, given how little we still know about the potential risks and consequences of deploying facial recognition, the school should have consulted with the authority for advice before using it.

Fingerprint Scanning for Student Meals

Another key case occurred in Poland, and concerned not facial recognition, but fingerprint collection.

Starting in 2015, Primary School No. 2 in Gdansk processed the fingerprints of 680 children as a method of verifying meal payment at the school canteen.

Once again, the school had obtained consent from the guardians of students whose biometric data was being processed. Four children opted out, and paid using an alternative method. However, these students had to wait at the back of the lunch queue until all the students using biometric data verification had passed through the canteen.

Just as the Swedish and French schools had done, the Polish school argued that it had obtained “explicit consent.” But, once again, the Polish data protection authority rejected this point. This time, the authority found that, because students who didn’t consent had to wait at the back of the line, the school was treating those students unequally – thereby placing pressure on students to consent.

As well as this, the authority found, once again, that biometric data collection wasn’t necessary in this case, given that there were alternative, less invasive methods for students to verify their meal payments.

Insecure Apps for Parents, Students and Teachers

Moving on to Norway: in 2019, the Education Agency of the Municipality of Oslo launched an app that allowed parents and students to send messages to school staff.

Soon after the app launched, Aftenposten, a widely read Norwegian news outlet, broke the news that the app contained a major security vulnerability. Apparently anyone who could log in to the portal could potentially gain access to every communication that had been sent through the app – as well as the personal information of 63,000 school children in Oslo.

The Norwegian data protection authority found that the Education Agency had breached the GDPR – Article 5.1(f) – by launching an app without properly testing it to ensure the information on it would be stored securely.

The authority also found that the Education Agency breached Article 32 of the GDPR, which requires data controllers to ensure a level of security appropriate to the risk. In light of the special vulnerability of children, the Education Agency did not take sufficient measures to ensure the confidentiality and integrity of its messaging system.

As a result of the findings, the data protection authority imposed a fine on the Education Agency. It also imposed a similar fine on the Municipality of Bergen, where data security insufficiencies exposed the personal data of students by allowing unauthorised users to access the school’s administrative system, as well as the digital learning platform where students’ classwork, evaluations, and personal data were stored.

Recurring Patterns

In considering whether a school violated the data protection rights of students, the data protection authorities of Sweden, France, Poland, and Norway all drew attention to the relationship between a school and its students. Schools are in a position of authority when they monitor students’ data, and students are in a position of dependency. The power imbalance in this relationship will most often invalidate consent as a lawful condition for biometric data processing.

As well as this, the particularly sensitive and vulnerable nature of children’s data processed in a school context –– often relating to issues such as health, identity and development –– requires schools to protect students’ personal data with particularly secure technical measures.

Educational authorities carry out many tasks that require collecting personal data from students. Technologies, including biometric technologies like facial recognition and fingerprinting, present new opportunities for schools to expedite processes like attendance monitoring, school security, and lunch payment.

But the recent GDPR fines imposed on schools for implementing such programs underscore that schools cannot sacrifice the privacy of their students for expediency.

Whatever measures schools adopt, they must safeguard children’s rights and be necessary and proportionate to the purpose pursued. If there are alternative measures that do not require the processing of sensitive personal data, and those measures can achieve the same results, then the processing of sensitive personal data is not likely to be justified under the GDPR.

The Power of Class and Mass Action for Digital Rights

By Antoin O'Lachtnain, 27th July 2020

Dense crowd of people

As big online companies “move fast and break things,” there are beneficiaries and victims. No one denies the benefits of digitalisation, but seemingly beneficial technology has in many respects turned into a free-for-all assault on users’ attention and privacy.

This new social space, in which we interact, and the workplace, in which we earn a living, are often turned into battlegrounds as a result. In the process, collateral damage is sustained by individuals’ data protection rights, their privacy and their livelihoods. This damage occurs at both an individual and a societal level.

“Class actions” and “mass actions” are a way of bringing together large groups of people who have been hurt at an individual level, allowing them to band together to get redress.

The power of such actions lies in their scale. The damage in each case is very small: some location data here, some information about religious affiliation there. Each piece on its own is almost harmless. The harm comes when it is combined with data from other sources, and with data about billions of other users, and sold or traded for a purpose like advertising.

The harm is like death by a thousand cuts. Each cut on its own could be survived, but the combination has a real effect. Even then, the damage in each individual case is probably relatively small. But the overall impact is large, because platforms like Facebook, Google and Amazon collect and use personal data about billions of users.

As with any legal field, there is a multitude of jargon and technicalities, but it might help to explain the most important distinction between a “class action” and a “mass action.” In a “class action,” the plaintiffs in court and their lawyers represent everybody in the same situation as they are in, regardless of whether they have formally joined the action and regardless of whether they even know about it. This type of action emerged in the United States in the 1960s and became an important mechanism for redressing breaches of consumer rights, civil rights and environmental standards.

Since then, Australia has adopted “class actions” and some countries in Europe have adopted limited forms of “class actions” which allow consumer groups to take actions on behalf of a class of consumers. 

A “mass action” is a more general, less legally specific term. In a “mass action,” large numbers of people are represented in a single complaint, with the same or very similar facts and laws applying to all their situations. They may or may not represent a class in the sense of a “class action.” Nonetheless, a single complaint with thousands of complainants pursuing a shared goal has the potential to set an important and impactful legal precedent. 

Hitting Companies Where it Hurts

Damages, namely a payment to the complainant by the responsible party as compensation, are a critical issue. Not all actions will involve damages, but the prospect of damages will make the cause a much bigger concern for the digital company involved. 

The obvious benefit of successful class or mass actions is that they provide justice to individuals involved, especially if damages are payable. But the benefit is potentially much bigger.

The prospect of class and mass actions is going to be a major deterrent for major online companies, especially for the largest and most profitable of them. These companies simply cannot afford to lose a series of cases like this, because it would inevitably result in a snowball of follow-on cases from victims in a similar situation. The company would have to make a financial provision for losses and potential losses through damages awards, and this would adversely affect its stock price. Needless to say, even if no damages were payable in one case, follow-on claims for damages would be a very real prospect.

In the medium to long term, the effect of these actions will be that the digital companies will have to avoid such ill effects by controlling their own behaviour, by having regard to the rights of their users, and by improving their data protection and other human rights practices to the point where they are able to stand over them and win in court.

How to Harness Mass Action

So, what does it take to mount a mass or class action?

The cause of action is very much the starting point. There needs to be wrongdoing, and ideally damages need to be due as a result of that wrongdoing.

One example of such a wrong is breach of data protection rights. The GDPR provides specifically that damages are payable where data protection rights are breached. Equally, if a group of people (for example, an ethnic group) suffered harm as a result of something that happened online and which could have been prevented, then there might well be a case that could be brought as a mass or class action.

The practicalities of a class or mass action are manifold. Firstly, there needs to be a clear case that is applicable to large numbers, preferably millions or tens of millions of people.

Secondly, the action needs to be publicised to get an adequate pool of claimants. It needs public awareness and complainants need to sign up to participate. Each of these individual clients needs to be dealt with and their particular case may need to be managed.

Thirdly, the action needs the right legal representation. These cases will break new ground and will likely be complex, and this will place great demands on the legal team.

Fourthly, the case needs to be funded because a case of this type will be expensive to undertake, due to the complexity of the subject matter and because the online company will fight tooth and nail to avoid the precedent a loss would set. 

Maybe it goes without saying that the case needs an appropriate legal form. The laws of the jurisdiction need to provide strong enough protections to give you a realistic prospect of winning the case, and they also need to provide a mechanism for you to operate your class or mass action.

By way of example, the GDPR provides for non-profit organisations to take cases on behalf of a group of affected individuals, but only in countries that have opted into this particular provision in their national law. The case has to be in the “right” jurisdiction, taking into account where the target company has a presence and where the complainants are located. The mechanism chosen has to provide adequate redress.

The judicial forum chosen also matters. Typically, data protection authorities do not have the power to award damages to wronged data subjects. The complainants must usually bring their complaint to the courts of the country to collect such damages.

There are many obstacles to class and mass actions and many pitfalls to be avoided. But class and mass actions have the potential to make a big practical difference in relation to digital rights.

Online companies move famously fast and their bad behaviour affects billions of people in small but insidious ways. Class and mass actions can be a way for the legal system to provide a retrospective judgment on these companies’ actions, and orders which result in payment of money as damages give these judgments strong “teeth.” Companies cannot take these judgments lightly, because they can no longer just think about what they can get away with in the short term.

Class and mass actions are a way to send a message to digital companies (and their investors) that they will ultimately need to face up to and pay for the negative consequences of their profit-making activities.

Antoin O Lachtnain is a Director of Digital Rights Ireland, and is Managing Director of ex muris, a firm providing data services to the energy industries.

Image by Chuttersnap on Unsplash