How To Communicate Digital Competition Law Issues

By Jonathan Todd, 2nd March 2020


When communicating competition law issues to the public, including those concerning new technologies, there are four key considerations to keep in mind.

  1. Keep it simple. Competition policy may sound very complicated, but the essential principles are not.

  2. Find the right audience. Make your messages relevant to as many people as possible.

  3. Pay attention to timing. It is crucial not to give advance warning to those who may be in breach of competition rules, so that they cannot destroy evidence before an investigation is launched.

  4. Communicate effectively. If you do not communicate effectively about your case, vested interests will fill the vacuum and try to influence the outcome.

Keep it Simple

Pursuing a successful competition case can be very complicated. Communicating competition policy, however, is usually simple.

Essentially, competition policy seeks to ensure that markets operate in a fair manner and benefit both consumers and businesses. In particular, it seeks to maximise customers’ choice of innovative products and services at competitive prices, and to protect consumers from abusive behaviour by companies.

Competition policy seeks to ensure that markets operate in a fair manner

The Commission and national competition authorities in EU Member States can, and do, use their very considerable enforcement powers to tackle anti-competitive behaviour by companies. Such behaviour often includes agreeing to fix prices, charging excessive prices, unfairly squeezing competitors out of a market, or imposing unfair terms and conditions on customers. The Commission and national competition authorities also ensure that mergers and takeovers do not reduce effective competition in the EU’s Single Market to the detriment of customers.

Find the Right Audience

The wider the audience you reach, the more influence you will have in terms of gaining public support for your actions.

To reach a wide audience, the mass media, especially TV, radio and internet media, remain the most effective channels for getting messages across to a large number of people. Don’t worry too much about specialised media, because they will take an interest in any case.

Social media, notably Twitter, can be a useful way to communicate with journalists and specialists. However, social media requires a huge investment before it becomes a viable channel to communicate with a significant number of people because to be successful you need to be interactive, which is very resource intensive.

It’s important to bear in mind, however, that journalists, and more to the point their editors, are not under any obligation to run your stories just because you think they are hugely important.

They will only pick up stories if they think their audiences will be interested in them: stories that are relevant to their audiences, stories that will make people want to watch or listen to their programmes or buy their newspapers or click on their websites. They want stories that pass the “so what?” test and resonate with people.

They want stories that pass the “so what?” test and resonate with people

Journalists and editors will pick up stories if you can give them credible examples of how people and businesses may be suffering as a result of anti-competitive practices that infringe their digital rights to, for example, privacy or freedom of expression. Put yourselves in the shoes of the people that may be suffering from these threats to their digital freedoms and explain the situation from their perspective.

Bear in mind that journalists want stories in clear language that their audiences (and journalists themselves) can easily understand without being competition law or digital rights experts. It does not matter how complicated a case is: the potential negative impact on people can normally be explained in very simple terms.

For example, in the antitrust cases against Google concerning its comparison shopping service and its Android mobile operating system, and in the Amazon e-books case, the Commission explained the harm done in terms of people having less choice and being deprived of potentially more useful and innovative products and services.

…people’s data is being used without sufficient safeguards, exposing people to fraud, unwanted advertising or unauthorised surveillance

In antitrust cases concerning privacy, one could perhaps explain that people’s data is being used without sufficient safeguards, exposing them to fraud, unwanted advertising or unauthorised surveillance. Or that some categories of people may be losing out as a result of discrimination arising from algorithmic decision-making or artificial intelligence.

Pay Attention to Timing

The timing of communication is crucial so as not to damage the chances of a successful investigation.

For example, if you publicise a complaint you’ve made before the Commission or another competition authority can raid the companies or organisations involved, or open a formal investigation, there is a very high risk that they will destroy any evidence. Companies may be motivated to derail the investigation so as to avoid a finding that they have infringed competition law. The time to begin communicating, therefore, is the moment the competition authority has made the investigation public, not before. It is simply not worth getting publicity for a good story if the price is sabotaging the substance of an investigation.

Communicate Effectively

If you do not communicate effectively about your cases, vested interests will fill the vacuum. And in the digital world, vested interests can be very rich, powerful and vocal.

If you do not communicate effectively about your cases, vested interests will fill the vacuum

I can give you a couple of examples from my experience. Before the Commission adopted the Google Android decision, Google placed a huge number of full-page ads featuring beautiful young people, claiming Android was the most fantastic tool that had ever existed for app developers, allowing them to, for example, tackle world hunger and help people with disabilities. And before the Commission took antitrust decisions against Qualcomm (for paying customers not to buy from rivals, and for predatory pricing aimed at putting a rival out of business), adverts appeared on Brussels buses explaining that Qualcomm, a company most people had never heard of, was improving people’s lives.

Effective communication is therefore important to counter, proactively or reactively, the arguments of vested interests that are likely to seek to undermine or misrepresent any antitrust investigation against them.

Jonathan Todd is Special Adviser to the European consumer organisation BEUC, a former European Commission spokesman and a former journalist.

The Importance of Antitrust in the Digital Economy

By Liza Lovdahl Gormsen, 2nd March 2020


Why is antitrust so important for the digital economy? Because the digital economy is ruled by data, including personal data.

The latter raises the issue of privacy. Privacy is not necessarily important in and of itself for competition regulators, but data is. Data can act as a barrier to entry, and it can also be used to foreclose competitors’ access to a market, or to leverage market power from one market in order to abuse it in another.

These issues go straight to the core of antitrust. It is extremely important to understand how data is collected, how it is used and combined, and how it is monitored. Data gives firms a competitive advantage in almost all markets.

Many of the big tech platforms operate with “take it or leave it” terms and conditions, which allow them to collect both first party data and third party data. A combination of first and third party data can be used to create super-profiles on users and surveil users online.

A combination of first and third party data can be used to create super-profiles on users and surveil users online

What is abusive is not the combination of data per se, but the imposition of data collection and combination as preconditions on users seeking to access a given platform’s service. This not only violates users’ fundamental human rights in terms of the right to privacy but, where the platform holds a dominant position, is also an antitrust violation in the form of non-price exploitation and exclusion.

Due to the serious underenforcement by privacy and data protection regulators, as well as by antitrust enforcers, certain tech companies have been allowed to develop into monopolies.

The ongoing global debate on antitrust law can hopefully ensure that the pendulum swings to the right level of enforcement – which can only be more than the current regulatory stagnation allows. This, however, requires that antitrust enforcers and data protection authorities co-operate to rein in tech companies.

Certain tech companies have been allowed to develop into monopolies

A Question of Consumer Behaviour

When it comes to the digital economy, it is important to ask “what kind of society do we want to live in?”

Our society is slowly evolving into a dystopia in which current conceptions of privacy are completely lost. Solely blaming the big tech companies would be unjust, as users of the platforms have been part of the problem by allowing their own privacy to be eroded: for example, by using technology to monitor their kids’ mobile phones or track where they are, and by constantly uploading photos of one another to the internet.

One serious problem in the digital economy is what is referred to as “behavioural surplus”: the tech platforms know what users want before users know it themselves. As a result, users can easily be manipulated, and the platforms have become very good at convincing them to part with their personal data in return for a service. For example, users could easily switch to alternative search engines such as Bing or DuckDuckGo, but they tend not to do so.

Our society is slowly evolving into a dystopia where current conceptions of privacy are completely lost

Users voluntarily harm themselves by giving away too much personal data. For this reason, we need a change in consumer behaviour. But this will likely take a very long time. In the meantime, it is clear that competition authorities need to work with privacy and data protection regulators.

Antitrust Enforcement Law in the Digital Economy

The need for a framework of rules for digital platforms was recently recognised by European Competition Commissioner Vestager, who has stated that we need to enforce the competition rules firmly to stop digital platforms using their power to deny rivals a chance to compete.

There is no doubt that enforcement in the digital space has ramped up over the last couple of years. To be fair to the regulators, when the internet and digital economy were first emerging in the 1990s and early 2000s, it was unclear what shape they would take. The explosive growth and innovation in the online economy during that period appeared to validate the idea that markets were best left to themselves.

However, allowing private power to go unregulated created three problems: (1) monopolies in certain sectors, (2) surveillance and (3) disinformation – all three devastating for digital rights.

If the internet had emerged in another period, it might have looked very different. But the online economy has developed in an era when we have three chief means of keeping corporate power in check: antitrust, economic regulation and public ownership. That said, these tools have not been used to protect citizens’ digital rights.

Concerns expressed about antitrust enforcement in digital markets may at first look new. But the leveraging-based theories of harm applied in recent and ongoing enforcement against Google and Amazon correspond to well-known “traditional” theories of harm: abusive tying, leveraging, discrimination and refusal to supply. The notable exception, which is truly novel (at least in a European context), is the use of antitrust to pursue exploitative non-pricing abuses. This can be seen in the German Facebook case, which concerned data exploitation caused by the company’s terms and conditions.

The Need to Address Business Models of Tech Companies

Big tech companies are often referred to as “GAFA”, “FAANG”, or some other acronym. This is not ideal, as they have very different business models. While Facebook and Google are data-driven businesses dependent on advertising revenue, Apple and Microsoft generate their revenues from hardware and software products. Amazon, on the other hand, operates as a retailer and a marketplace, driven by taking a share of sales revenues. It is therefore important to distinguish between these companies, as they accumulate power in very different ways.

…allegations of anticompetitive behaviour through data use (or misuse) strike at the heart of their business model

These different business models, and therefore financial incentives, are critical to both the analysis of competitive effects and remedy design. This is especially true for Google and Facebook, where allegations of anticompetitive behaviour through data use (or misuse) strike at the heart of their business model. This represents a clear paradigm shift for antitrust enforcement, which has traditionally focused on conduct that is ancillary to, rather than at the core of, the business model of the investigated businesses. Imposing fines does not appear to have much deterrent effect unless the fines are very high, so, when intervening in these markets, enforcers need to impose remedies such as preventing Facebook and Google from combining first and third party data.

This demands that careful consideration be given to which remedies are appropriate, in circumstances where there is real potential to undermine business models which, notwithstanding the concerns raised by authorities, are generally recognised as delivering significant consumer benefits.

Liza Lovdahl Gormsen is Senior Research Fellow in Competition Law and Director of the Competition Law Forum at the British Institute of International and Comparative Law and a Board Member at the Open Markets Institute.

Photo by Suzy Hazelwood from Pexels

Take #3: Building a Global, Inclusive Digital Rights Movement

By Nani Jansen Reventlow, 25th February 2020

Last week, DFF’s annual strategy meeting came back with a bang. Our third meeting was our biggest to date, and we were fortunate enough to be joined by members old and new from around the world. Attendees hailed from Argentina, the UK, Estonia, Serbia, Ireland, Bulgaria, Hungary, the US, the Netherlands, South Africa and beyond.

In three days of working sessions and consultations, we dove right to the heart of digital rights: from ongoing questions around AI and algorithms to emerging conversations, such as the field’s parallels with the climate struggle and labour rights.

Despite the gravity of challenges facing human rights in the current era, the experience of coming together to brainstorm and discuss ways forward was an inspiring one. By the time we’d wrapped up, the walls of our lovely venue in Village Berlin were plastered floor to ceiling in rainbow sticky notes that promised future collaboration on projects.

Continuing Conversations

In digital rights, some conversations crop up over and over. We discussed at length the rise of facial recognition technology use by states, homing in on cases stretching from Europe to China to Latin America. We discussed the smart-video surveillance system currently being rolled out in Belgrade, Serbia, while also hearing details about the evolving situation in the UK, where facial recognition has been permanently deployed by police in some regions.

The subject of algorithms and algorithmic decision-making was also omnipresent: we heard, for example, about a case being fought in Spain to demand transparency in the algorithms being used by public authorities. Then, in a rich debate about filters, blocking and private censorship, we discussed potential solutions, ranging from reforming the AdTech business model to platforms requiring consent from users for filters.

On Friday, we hosted a focused consultation session on AI and human rights, and how we can effectively work on litigation in this area. We asked questions including: what value can be added by technologists in this kind of litigation? Where are the knowledge gaps when it comes to AI/machine learning and the law? 

Emerging Perspectives

Fresh perspectives were shared as well. Climate change was a hot topic: with the environmental crisis at a tipping point, the junction at which climate issues and digital rights meet is hard to avoid. One conversation looked at how digital rights activists can borrow strategies from the climate change struggle, while another focused on the intersection between the two fields – including the targeted surveillance of climate activists and the monitoring of energy consumption through smart meters.

The digital welfare state also proved itself an inescapable, and rapidly escalating, issue. We looked at the exponential digitisation of social protection provision, and asked ourselves what tools or strategies we can adopt to challenge the technology that monitors, profiles and punishes one of society’s most vulnerable groups: welfare applicants. On Friday we hosted a fruitful in-depth consultation on developing a litigation strategy to tackle this rising problem. We tried to conceptualise and define the issue, while also mapping stakeholders already working in the area.

Zooming Out

As well as tackling digital rights’ challenges old and new, we took time to zoom out and consider the broader power structures at play. In light of DFF’s recent decision to focus on decolonising the field of digital rights, we discussed concrete steps for making that a reality – and, crucially, why it matters. Ideas for effecting change ranged from changing the way we write job specs when hiring new candidates to ensuring that we create space for discussions around decolonisation in the workplace.

Labour rights were also high on the agenda, and we explored the issue of collective bargaining, a particularly pertinent issue in the gig economy. We also sought to address the working conditions of content moderators, who often work in extremely challenging circumstances.

Against the backdrop of these profoundly difficult human rights challenges, one topic resonated deeply in the room: burn-out. It’s no secret that work in the field can be mentally and emotionally taxing, and it was refreshing to see the prioritising of individual well-being and mental health.

Safe to say, we were left feeling inspired and galvanised. At DFF, we’ll be striving to harness this momentum: we’ll be organising follow-up focus meetings and running a blog series featuring new ideas shared at the meeting. The invaluable knowledge gained will inform and lead our work going forward – so watch this space.