Legal Pathways for Platform Accountability in Europe

Practical Guide


Graphic design by Marea Zan

Illustrations by Nippun Bhalla


This guidebook frames platform accountability as a domain of action that requires bringing together legal, technological, and policy expertise with social movements, providing a context for effectively resisting platform power. Its purpose is to give litigators ideas about the key legal pathways available for platform accountability under EU legal regimes. With this guidebook, we are not trying to cover exhaustively each pathway’s possibilities for holding platforms accountable. Rather, we want to focus on how specific aspects of each pathway hold potential towards that goal. The question this guidebook aims to answer is as follows: How will civil society, an individual, and/or a community benefit from using the available legal pathways to hold platforms accountable?

Several regulatory instruments have been created to counter and resist the harms caused by platforms. The legal issues raised are numerous: platforms have systematically violated the data protection and privacy rights of their users; they test the limits of consumer protection; they abuse their power over online speech through content moderation; they act as a ‘boss at a distance’ for gig workers, who are subject to exploitation; and they categorise, profile, and discriminate against groups of people. These issues, along with the recognition that platforms perpetuate rather than address longstanding and systemic societal inequalities, led us to focus on the following pathways:


Digital Services Act

By Dorota Głowacka

The core objective of this legal pathway is to tackle the risks and potential harms associated with online intermediaries operating in the EU while reinforcing user rights, ultimately creating a safer digital environment where fundamental rights are effectively protected. It primarily addresses the spread of illegal content on online platforms and their opaque, arbitrary moderation, tackling both excessive removals and failures to take down unlawful material. 

The Digital Services Act (DSA) is a regulatory framework that establishes the responsibilities of online intermediaries operating in the EU, such as social media platforms and e-commerce marketplaces. Its core objective is to tackle the risks and potential harms associated with these services while reinforcing user rights, ultimately creating a safer digital environment where fundamental rights are effectively protected.

The DSA sets out specific requirements for different types of intermediaries according to their roles, size, and impact on the online ecosystem. The most stringent rules apply to major platforms and search engines with over 45 million users in the EU. These entities, classified as ‘Very Large Online Platforms’ (VLOPs) and ‘Very Large Online Search Engines’ (VLOSEs), include, among others, Facebook, Instagram, TikTok, YouTube, and X (see the full list of designated VLOPs and VLOSEs).

The DSA primarily addresses the spread of illegal content on online platforms and their opaque, arbitrary moderation, tackling both excessive removals and failures to take down unlawful material. Additionally, the Regulation targets platforms’ deceptive design features, imposes certain restrictions on tracking-based advertising, introduces some rules for recommender systems, strengthens protections for minors, and enhances consumer safeguards against fraudulent sellers and unsafe products online.

Enforcement of the DSA is carried out by both national regulatory bodies and the European Commission. The Regulation became fully effective on 17 February 2024.

There are several mechanisms to seek justice in the digital space under the DSA.

Complaints to Digital Services Coordinators

Digital Services Coordinators (DSCs) are national authorities designated by each EU Member State to oversee the implementation and enforcement of the DSA. Their task is to ensure that online intermediaries adhere to the rules set forth in the Regulation. If a user believes that any online intermediary, including VLOPs and VLOSEs, has failed to comply with its obligations under the DSA, they can submit a complaint to the DSC in their own country (Article 53).

DSCs are primarily responsible for managing complaints related to online intermediaries operating within their jurisdiction. This typically entails addressing issues concerning service providers established in their respective countries (for the majority of VLOPs in the EU, this is Ireland). If a complaint pertains to a hosting provider or platform based in another country, the DSC conducts a preliminary evaluation and forwards it to the relevant DSC in that country, which then takes over and handles the case.

If a complaint concerns VLOPs or VLOSEs, the European Commission may step in as it may fall under its exclusive authority. Additionally, the Commission works alongside Member States to oversee VLOPs’ and VLOSEs’ compliance with broader DSA regulations, applicable also to smaller platforms. DSCs can intervene only if the Commission has not already taken action on the alleged infringement.


Mechanisms dedicated to contest platform content moderation decisions

 

1. Internal complaint-handling mechanisms

The DSA requires platforms to establish internal complaint-handling systems, accessible both in cases where the platform has taken a moderation measure and where it has declined to do so (Article 20). The system allows users to challenge content removals, visibility restrictions, account suspensions or terminations, and limitations on monetisation, whether based on illegal content or violations of the platform’s terms and conditions. Additionally, the system is available to anyone submitting a notice under Article 16 reporting illegal content that the platform has chosen to disregard (i.e., decided to leave the content unchanged).

The DSA sets forth some basic conditions that such a complaint-handling system should meet. It must be easily accessible, user-friendly, and available electronically free of charge for at least six months after the content moderation decision. The system should enable users to submit precise and well-substantiated complaints, ensuring platforms review them fairly, promptly, and without discrimination. Platforms cannot rely solely on automated means to process complaints. If a complaint is justified, the platform must reverse its original decision—e.g., by removing content that was wrongly kept or restoring content that was mistakenly deleted.

2. Out-of-court dispute resolution

The DSA also grants users the right to appeal externally to out-of-court dispute settlement bodies (ODS bodies) (Article 21). These independent, mostly private entities can be established under the DSA and must be certified by DSCs within their respective jurisdictions. Their main role is to help resolve content moderation disputes between users and online platforms. Both users affected by moderation decisions (such as content removal), and those whose reports of illegal content were dismissed, can seek redress through this process. The system is also available to users who could not resolve their issue through the platform’s internal complaint-handling mechanism (under Article 20). However, the DSA does not require users to exhaust the platform’s internal appeals process before turning to an ODS body.

ODS bodies aim to provide a faster and more affordable alternative to traditional courts for resolving content moderation disputes. In principle, they are expected to issue decisions within 90 days. Their recommendations may include reinstating content, reversing account suspensions, or removing content, depending on the case. The outcomes of dispute resolution processes are, however, not binding for platforms, meaning platforms are not legally required to comply. Platforms are nonetheless expected to engage in good faith. 

Users may select any certified ODS body that suits their needs, considering language compatibility, platform coverage, or specialisation in specific disputes. Users should also consider costs–while some ODS bodies offer free services, others charge a nominal fee. Generally, platforms cover dispute resolution costs, even when the user loses. However, if the user has clearly acted in bad faith, they may be required to reimburse the platform’s ODS fee or other expenses related to the dispute resolution process. A list of active ODS bodies is published by the European Commission.

Judicial redress

Judicial redress remains predominantly governed by national legislation. The DSA affirms that its provisions do not limit the possibility of initiating court proceedings under domestic laws at any stage. In particular, users can pursue judicial redress to challenge platforms’ moderation decisions, including when dissatisfied with outcomes of internal or out-of-court mechanisms (Article 21(1)).

Furthermore, the DSA explicitly empowers users to seek compensation in national courts for damages caused by intermediaries’ non-compliance (Article 54). 

Several civil society organisations have already submitted complaints, mainly under Article 53, highlighting alleged DSA violations. The majority of the complaints address issues related to content moderation systems and complaint-handling processes on various VLOPs and smaller platforms. The subjects of the complaints include difficulties in reporting illegal content, non-transparent design of notice channels, long processing times, insufficient statement of reasons provided by platforms when removing content, or inability to contact platforms. Furthermore, complaints concern manipulative design and recommender systems, such as the lack of accessible non-profiled feed options.

An overview of these complaints and their progress, along with other cases initiated by DSCs and the European Commission, can be tracked through the Digital Services Coordinators Database run by European Digital Rights (EDRi). As of the time of writing, most complaints remain pending. However, in some instances, the mere filing of a complaint has led companies to comply with the DSA without requiring lengthy legal proceedings.

The DSA has been incorporated into the annex of the Representative Actions Directive (2020/1828), allowing qualified entities–including non-profit organisations that meet specific criteria–to initiate representative actions on behalf of affected users (Article 90). This establishes a legal pathway for collective actions challenging DSA violations.

Yes. For example, the DSA’s content moderation framework (Articles 14, 17, 20, and 21), which requires platforms to implement certain standards and procedures in their moderation processes, limits platforms’ ability to make arbitrary and discriminatory decisions on third-party content. It can thus help protect freedom of expression, the right to privacy, or the right to non-discrimination. These safeguards are particularly valuable for marginalised groups who often face higher rates of content policing or suppression, and who are vulnerable to hate speech and cyberviolence on online platforms.

Furthermore, VLOPs and VLOSEs are required to assess and mitigate systemic risks that their services pose to, inter alia, fundamental rights (Article 34 and 35). When appropriately implemented, these provisions can help counter the harmful effects of content moderation and advertising or recommender systems. These harmful effects can contribute to fundamental rights violations including the exploitation of personal data, discriminatory content personalisation, amplification of hateful material, suppression of certain voices online, and threats that stem from undermining the pluralism of the media and public discourse. However, compliance with the risk assessment framework relies primarily on regulatory oversight by the European Commission, and translating these obligations into legally enforceable individual rights may be challenging.

A significant advancement in the DSA, which can be particularly beneficial for marginalised and vulnerable groups, is the provision allowing users to mandate a non-profit body to exercise their rights on their behalf (Article 86). Eligible bodies include consumer protection groups and civil society organisations dedicated to anti-discrimination or protection of digital rights, provided they meet criteria laid out in Article 86(1).

The non-profit body can act on behalf of a user (or groups of users) primarily by filing complaints with DSCs about DSA violations, submitting content moderation disputes to the ODS bodies, or by representing users in internal complaint-handling mechanisms under Article 20. The DSA mandates platforms to take technical and organisational steps to ensure complaints from these organisations are handled promptly and with priority (Article 86(2)).

The enforcement regime under the DSA is relatively new. Many DSCs are still in the process of establishing their enforcement mechanisms. As a result, uncertainty persists regarding how these mechanisms will function in practice. Enforcement actions may be inconsistent or delayed, making it difficult for users to have their complaints addressed efficiently. 

Judicial proceedings under the DSA, like other court cases, can be costly and time-consuming, potentially limiting access to redress. Moreover, the DSA lacks specific jurisdictional rules, requiring claimants in VLOP-related cross-border cases to rely on general EU and national legal frameworks, which can be complex and, in some instances, hinder effective access to justice.

Submitting a case to an ODS body is much more affordable and a faster option for users to contest platform moderation decisions, when compared with court litigation. However, since the outcomes of the procedure are not binding, their enforcement remains somewhat uncertain. 

The DSA grants the right to lodge a complaint with the DSC or claim compensation under Article 54 to ‘recipients of the service’–that is, any natural or legal person who uses a given intermediary service–located and established in an EU Member State. This means that third parties whose rights have been impacted–for instance, as a result of illegal content posted on a platform–will not be eligible to access those redress mechanisms (however, they may still have other legal remedies available).

Users can submit complaints to DSCs directly, without the need for legal representation. In some cases, they may find legal assistance beneficial in structuring their arguments more effectively. As already mentioned, the DSA explicitly allows users to mandate a non-profit organisation to exercise their rights on their behalf (Article 86).

The DSA does not set out specific procedural requirements for submitting complaints to DSCs (though some additional rules may be introduced in the respective national legislations implementing the DSA). At a minimum, a complaint should clearly identify the DSA violation and include any available evidence supporting the claim (see more on this below).

To seek compensation under Article 54, users must demonstrate that the alleged DSA violation directly caused the damage they suffered (i.e., it would not have occurred had the online intermediary fulfilled its obligations under the DSA). The claim must be pursued in accordance with applicable national law.

To increase the chances of success, any complaints under the DSA should be as well-substantiated as possible. The type of evidence required will depend on the specific case. For instance, in disputes related to content moderation, it is essential to collect screenshots of any notifications from platforms (e.g., removal notices, appeal confirmation messages) and user submission forms, as well as emails and other written communications with platforms.

Collecting evidence, particularly in cases involving algorithmic systems or suspected content suppression without notification, can be difficult, as users typically lack access to critical platform data. Information that can be used to support such claims includes comparing engagement levels (likes, comments, shares, views) before and after suspected shadowbanning, or checking whether specific posts or accounts appear in search results when using relevant hashtags or keywords. In some instances, more advanced data science techniques may help, such as adversarial audits (involving scraping or experiments with sock puppet accounts), which can shed more light on issues like algorithmic bias.
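To illustrate, a minimal sketch of such an engagement comparison, assuming the metrics have been collected by hand into a spreadsheet (the file name, column names, and date below are illustrative assumptions, not any platform’s export format):

```python
# Hedged sketch: compare average engagement before and after a suspected
# moderation action, using manually collected per-post metrics.
import pandas as pd

# Assumed columns: post_date, likes, comments, shares, views
posts = pd.read_csv("my_posts_engagement.csv", parse_dates=["post_date"])

suspected_action_date = pd.Timestamp("2024-06-01")  # assumed date the restriction began

before = posts[posts["post_date"] < suspected_action_date]
after = posts[posts["post_date"] >= suspected_action_date]

metrics = ["likes", "comments", "shares", "views"]
summary = pd.DataFrame({
    "mean_before": before[metrics].mean(),
    "mean_after": after[metrics].mean(),
})
summary["change_pct"] = (summary["mean_after"] / summary["mean_before"] - 1) * 100

print(summary.round(1))
```

A sharp, sustained drop across several metrics does not prove suppression on its own, but it can help substantiate a complaint alongside the screenshots and platform correspondence mentioned above.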

Furthermore, certain transparency and reporting obligations under the DSA can serve as valuable tools for evidence collection. These include transparency and risk assessment reports that VLOPs are required to publish, as well as resources such as the DSA Transparency Database, where platforms submit statements of reasons explaining why content was removed or restricted.
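As a hedged illustration, the sketch below shows how a locally downloaded export of statements of reasons might be filtered for a given platform and period; the file name and column names are assumptions for illustration only and should be checked against the schema actually published with the DSA Transparency Database:

```python
# Hedged sketch: filter a locally downloaded export of statements of reasons
# for one platform and time window, then tally the grounds given for restrictions.
import pandas as pd

# Assumed file and columns: platform_name, created_at, decision_ground
sor = pd.read_csv("statements_of_reasons_export.csv", parse_dates=["created_at"])

start, end = pd.Timestamp("2024-06-01"), pd.Timestamp("2024-06-30")
subset = sor[
    (sor["platform_name"] == "ExamplePlatform")  # hypothetical platform name
    & sor["created_at"].between(start, end)
]

# How often each ground (e.g. illegal content vs. terms and conditions) was invoked
print(subset["decision_ground"].value_counts())
```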

Over time, the collection of relevant evidence may become easier with the support of the data access framework outlined in Article 40 DSA. This provision specifically recognises civil society organisations among the entities eligible to access certain data from VLOPs and VLOSEs for the purpose of conducting scientific research.

AI Act

By Laura Lazaro Cabrera

AI is being increasingly incorporated into a range of products and services, and platforms are no exception. The AI Act specifically recognises the risks inherent in the use of AI and it prohibits AI systems that may deliberately or accidentally manipulate, that distort the behaviour of individuals or groups, and that lead, or are likely to lead, to harm.

AI is being increasingly incorporated into a range of products and services, and platforms are no exception. Irrespective of the service provided by online platforms, they hold vast troves of end-user data–both declared and inferred–allowing them unique insight into individuals’ personality traits, preferences, and behavioural patterns. This unparalleled window into users’ online behaviour acts as both an incentive and enabler of AI adoption. Through AI, platforms can provide hyper-personalised services, ranging from tailored advertisements and dynamic pricing to chatbot companions mimicking human speech and AI agents assisting with the execution of tasks. The use of these tools raises significant concerns due to their opaque functioning and the heightened, unchecked risk of manipulation of end users.

The Artificial Intelligence Act (AI Act) specifically recognises these risks. It prohibits AI systems that may deliberately or accidentally manipulate, that distort the behaviour of individuals or groups, and that lead, or are likely to lead, to harm.

Owing to its origins in product safety legislation, the AI Act takes a transversal approach to AI regulation, applying to a wide range of entities across the private and public sectors regardless of their size, prominence, or influence. Crucially, the AI Act’s explicit connection and multiple references to existing product safety legislation creates an unprecedented opportunity for public interest advocates to tap into longstanding product market surveillance mechanisms focused on upholding high health and safety standards.

While the AI Act does not specifically target or single out online platforms, they are indirectly covered as providers or deployers of AI, respectively understood as entities developing AI systems or models, and entities using AI systems. In other words, online platforms will need to comply with AI Act obligations if they either develop an AI system in-house or use an AI system developed by someone else, as long as the AI system falls in one of the AI Act’s recognised risk categories.

 The risk category attributed to a particular type of AI system will depend on the specific circumstances, intention, and effect of the use of that AI system. For example, the AI Act attributes an unacceptable level of risk to–and therefore prohibits–two practices which could reasonably be deployed by online platforms, and which rely on harmful manipulation. Firstly, the AI Act prohibits AI systems which deploy subliminal, purposefully manipulative, or deceptive techniques. To fall under the prohibition, the objective or effect of the technique must be to materially distort a person’s behaviour, causing them to take a decision they would not otherwise have taken and causing–or being reasonably likely to cause–them or others harm. According to the Guidelines on Prohibited AI Practices published by the European Commission, this prohibition could capture a chatbot or AI-generated content that presents false or misleading information in ways that aim or have the effect of deceiving individuals and distorting their behaviour. The prohibition would apply if the distortion in behaviour would not have happened but for the individual’s interaction with the AI system or the deceptive AI-generated content, in particular if the role of AI has not been disclosed. Secondly, the AI Act prohibits AI systems which exploit the vulnerabilities of a person or group due to their age, disability, or socio-economic situation, where the objective or effect is to materially distort their behaviour, causing or being reasonably likely to cause harm to them or others.

Placing in the EU market or using an AI system intended to operate (or effectively operating) in the manner described above is prohibited, attracting fines of up to 35 million euros or–if the responsible entity is an undertaking–7% of total global turnover, whichever is higher.

The entities responsible for identifying prohibited practices and fining the offending entities are market surveillance authorities (MSAs), key actors to leverage as part of this pathway. Under the framework created by product safety regulation 2019/1020, MSAs are empowered to carry out investigations to assess a product’s compliance with EU legislation. MSAs can also propose corrective actions, including the withdrawal or recall of a product and the alerting of the public to the risk, or prevent a product from being made available on the EU market (Article 16). Investigations can be launched at a MSA’s own initiative or, alternatively, based on: an entity’s past record of non-compliance, the risk profiling of a product based on the authority’s own assessment, or consumer complaints or information received from a wide range of sources, including other authorities or the media (Article 11(3)). The AI Act ensures that relevant information can be supplied to MSAs by enabling any person or entity suspecting an infringement of the AI Act to make a complaint directly to the relevant MSA (Article 85).

MSAs have extensive investigative powers, including the power to require entities to provide relevant documents and access to embedded software. Further, MSAs have the power to carry out checks of products, including by acquiring product samples under a cover identity, allowing them to inspect samples and reverse-engineer them in order to identify non-compliance.

Not at the time of writing. However, the prohibitions on manipulation entered into force in February 2025, and cases are likely to start emerging in the short term.

A key area to monitor is the AI Act’s intersection with the Digital Services Act (DSA). The latter was the first instrument to tackle dark patterns (Article 25(1)) and specifically prohibits online platform providers from designing, organising, or operating their online interfaces in a way that deceives or manipulates their end users, or that materially distorts or impairs the ability of end users to make free and informed decisions.  In its Guidelines on Prohibited AI Practices, the European Commission clarified that the AI Act prohibitions of manipulation are complementary to the DSA’s, and that dark patterns would be captured by the AI Act’s prohibitions if they were likely to cause significant harm. In other words, if a deceptive practice’s level of risk is unacceptable under the terms of the AI Act, then it will likely also be prohibited by the DSA.

The AI Act pathway can provide an opportunity for collective redress insofar as it creates an avenue for companies to be held to account for harmful cross-border practices, helping to overcome a longstanding challenge to fundamental rights enforcement in the EU. Where, pursuant to the AI Act, a MSA considers that non-compliance is not limited to its Member State territory, or decides to prohibit or restrict an AI system from being made available or put into service in a national market following an operator’s failure to undertake corrective action, certain notification obligations follow. The MSA must communicate that decision to the Commission and the other Member States. The MSAs in those Member States must then inform the Commission and other Member States of any measures adopted, and any additional information at their disposal, relating to the non-compliance of the AI system in question (Article 79(3)-(7)).

This legal pathway leverages a fundamental rights strategy. The right to freedom of opinion, as recognised by the EU Charter, is central to the AI Act provisions this pathway relies on.  Furthermore, the AI Act as a whole is premised on the idea of collaboration with fundamental rights authorities in the enforcement of the Act. This is exemplified by the fact that the AI Act enables authorities protecting fundamental rights, upon nomination by Member States, to request access to information or documentation produced to comply with the AI Act, where doing so is necessary to effectively fulfil their mandate (Article 77).

The AI Act’s prohibitions apply after several cumulative conditions have been met. Given that some of these conditions may be difficult to prove, meeting all the conditions can constitute a high hurdle for communities or impacted individuals. The recent Guidelines on Prohibited AI Practices published by the European Commission may ease some of this evidentiary burden. The Guidelines clarify how a material distortion of behaviour is made out and how harm should be understood in the AI context.

The AI Act prohibits conduct that either intends to materially distort a person or group’s behaviour, or which has this effect. The Guidelines clarify that the latter should be considered made out where a plausible or reasonably likely causal link exists between the potential material distortion of behaviour and the subliminal, purposefully manipulative, or deceptive technique deployed by the AI system (Paragraph 76). Further, the Guidelines align the interpretation of ‘material distortion’ with EU consumer law, which provides that there is no need to prove actual distortion of behaviour, so long as it can be shown that a commercial practice is likely to impact an average consumer’s transactional decision (Paragraph 80). The Guidelines go further still, noting that in cases where assessing the perspective of the ‘average’ individual is difficult, MSAs may examine specific cases from the perspective of specific individuals in order to assess how an AI system may undermine individual autonomy in concrete cases (Paragraph 81). To demonstrate how an AI system may have the effect of distorting behaviour (even if there is no intent on the part of the provider to do so), the Guidelines provide an example of an AI-powered well-being chatbot that is intended to support a healthy lifestyle. If the chatbot were to exploit individual vulnerabilities to adopt unhealthy habits or engage in dangerous activities, where it can reasonably be expected that some users will follow that advice and suffer harm, this will amount to a prohibited AI system.

The guidelines further provide that “harm” should be understood to include psychological, financial, and economic harms beyond physical harms (Paragraph 75). The Guidelines note that harms are often not isolated, and when they manifest in combination, can lead to compounded negative impacts (Paragraph 90). The Guidelines take a holistic approach to harms, noting that significant harm should be assessed based on: severity, context and cumulative effects, scale and intensity, affected persons’ vulnerability, and duration and reversibility (Paragraph 92).

In light of the above, it is possible for communities to collect evidence of AI systems deployed by platforms gradually distorting their behaviour, as well as evidence of compounded harms. This evidence can then be submitted to relevant MSAs.

While in principle the prohibitions seek to guarantee individual and collective protection against unacceptable risks, some practical challenges remain. Firstly, the prohibition on manipulative practices applies only to AI systems within the meaning of the Act. Depending on the technology at hand, it may be difficult for individuals to prove that an AI system falls within the scope of the AI Act in the first place.

Secondly, the product safety regulation on market surveillance on which the AI Act relies leaves Member States wide discretion to choose the conditions under which market surveillance powers are exercisable. This leaves open the possibility for Member States to subject the exercise of those powers to court approval or to authorisation by other public bodies (Article 14(3)).

Thirdly, while the AI Act enables several actors to make complaints to MSAs, it does not require these authorities to actively address and respond to these complaints. MSAs are only required to take complaints into account (Article 85). This is in stark contrast to, for example, the GDPR, which requires data protection authorities to investigate and respond to complaints. The GDPR also creates a right to an effective judicial remedy against data protection authorities that fail to process complaints in line with the standards set by GDPR (Articles 77-78). While the AI Act lacks procedural guarantees, lodging a complaint can trigger an investigation by a MSA. Even if the authority has no obligation to engage with the complainant, it will nonetheless have to investigate an AI system if the information provided rises to a given evidentiary threshold. Further, if the risk is to fundamental rights, the MSA is required to inform and fully cooperate with fundamental rights authorities as appointed by Member States. These authorities have the power to request and access information under the AI Act (Article 77). This opens an additional avenue for public interest actors appointed as fundamental rights authorities under Article 77 AI Act to engage with market surveillance authorities where they believe the threshold for launching an evaluation has been met.

Any individual or entity can submit a complaint to a MSA if they suspect that the deployment of an AI system infringes the prohibitions laid out in the AI Act. Neither the AI Act nor the market surveillance regulation sets specific procedural rules to regulate this.

The AI Act makes clear that lodging a complaint with a MSA is without prejudice to other administrative or judicial remedies (Article 85). In case of inaction by a MSA, the individuals, groups, and/or entities that submitted the complaint can turn to the courts.

There are no strict rules as to the nature of the evidence needed. However, individuals or organisations interested in bringing forward a complaint should ensure that the allegedly prohibited practice not only meets the requirements of the AI Act itself, but also the interpretation of those requirements in the Guidelines on Prohibited AI Practices. Collecting evidence of manipulation, material distortion of behaviour, and examples of actual or possible harms may not necessarily require technical expertise.

Platform Work Directive

By James Farrar

The remote nature of platform work, often carried out under conditions of automated surveillance and algorithmic management, has fragmented work and emboldened platform employers to deny the existence of an employment relationship. The Platform Work Directive aims to strengthen the hand of platform workers to secure employment status protections.

The Platform Work Directive (PWD) came into force on 1 December 2024. Upon transposition to national law, the directive aims to protect an estimated 40 million workers employed by more than 500 digital platforms across the European Union. 

The remote nature of platform work, often carried out under conditions of automated surveillance and algorithmic management, has fragmented work and emboldened platform employers to deny the existence of an employment relationship. A novel provision of the directive is the presumption of employment for workers when ‘facts indicating direction and control’ are found. This places the burden on employers to prove, based on the principle of the primacy of facts, that there is not an employment relationship. No longer will platforms be able to rely on imaginatively drafted contracts, once dismissed by British MP Frank Field as ‘gibberish’, to misclassify workers as independent contractors.

However, even with the benefit of a presumption of employment, workers will still need to make their case to establish the ‘facts indicating control’. While this is a hurdle that should be straightforward for rideshare and food delivery workers, others in emergent forms of platform work may face a more difficult struggle to reach the threshold. Here the directive’s new rules on algorithmic transparency, more far-reaching than those in the GDPR, will prove crucial for establishing the facts of the true nature of the relationship between platforms and their workers. Indeed, algorithmic transparency requirements extend to all platform workers regardless of employment status.

There are comprehensive transparency requirements for decisions taken or supported by automated systems that affect persons performing platform work in any manner. This includes any decision-making relating to the assignment of work, pay, working time, health and safety, and suspensions or terminations. Workers and their representatives must also be provided with information about how automated decision-making and surveillance data are used to evaluate workers and how the behaviour of the person performing platform work influences the decisions taken. Such information is vital for establishing the extent of ‘direction and control’ for the purposes of contesting status.

However, there remain some uncertainties and opportunities. First, the PWD fails to define a common standard for determining the employment status of workers beyond the presumption of employment. It also does not address how working time for platform workers should be determined. Many on-demand platform service providers rely on an oversupply of workers on standby to reliably meet rapid customer response times. Even where employment status has been determined, platforms argue that standby time is not working time, and so workers are not remunerated while waiting between tasks. This leads to longer shift times and the attendant health and safety risks.

Crucially, the excess supply of labour relative to demand opens up opportunities for platforms to engage in algorithmic wage discrimination, as employers pick and choose which workers will be allocated tasks and how much they will be paid. Over the last two years, many transport work platforms have abandoned transparent time- and distance-based pay in favour of dynamic pay systems where pay is highly variable and its determination opaque. While the PWD demands transparency for permitted automated decision-making, the GDPR would prohibit automated decision-making for dynamic pay, because its use could not qualify for an exemption from the general prohibition under Article 22(2) GDPR.

Thus, there is a three-way causation of harm: the lack of employment status causes an excess supply of labour that goes unpaid during standby time, which in turn enables greater algorithmic discrimination in dynamic pay and work allocation.

When implemented in national law, the PWD will strengthen the hand of platform workers to secure employment status protections. However, the presumption of employment is not a day-one right; rather, it is a procedural right in the contest for employment status.

Litigators will have the advantage of a presumption of employment (Article 5); a right to redress for any infringements, which can include redress for workers without employment status for relevant breaches of the rules on algorithmic management (Article 18); and a comprehensive right to transparency regarding automated monitoring systems and automated decision-making systems (Article 9). Organisations representing workers will have the right to take group action for the enforcement of rights (Article 19). Courts will have procedural powers to require the disclosure of evidence (Article 21), and employers will have the obligation to carry out DPIAs, which workers will have the right to access (Article 8).

In challenging for employment status, workers might start by asserting the presumption of employment and placing the evidentiary burden on the employer. Workers and their representatives may access data and information about the nature of automated monitoring and decision-making systems. To the extent that such systems are involved in the direction and control of the work, this is important evidence in any challenge of employment status. Beyond this, understanding the span of algorithmic monitoring and control of work will be important in determining that stand-by time is working time.

Key case law to establish employment status:

Case C-66/85, Lawrie-Blum (1986)

A “worker” is a person performing services of some economic value, under the direction of another person in return for remuneration.

C-256/01 Allonby

Formal independence doesn’t disqualify someone from being a worker if they are economically dependent and subject to direction.

C-229/14 Balkaya 

National classifications (employee/self-employed) do not override the EU test. Even trainees, temporary or agency workers may qualify as workers under EU law.

C-147/17 B v. Yodel Delivery Network Ltd (2020)

A genuine discretion over whether to work can weigh against employment status. This case has been controversial and is seen as a narrow exception.

C-610/18 AFMB (2020)

The primacy of facts determines the work relationship between the parties, rather than reliance on contracts.

Key platform work employment law cases:

Uber v Aslam (UK) 2021, Supreme Court

Established that Uber drivers are workers with basic employment rights. The court found that drivers were obliged to carry out the work personally, were subjected to performance management, and that Uber controlled the relationship with the customer.

Uber v Takele (France) 2020, Court of Cassation

Uber drivers were found to be in a relationship of employment with Uber due to the high degree of control over drivers.

Deliveroo (Spain) 2020, Supreme Court

Deliveroo riders were found to be employees because their work was algorithmically directed, the platform controlled pay and pricing, and workers had no meaningful independence.

Glovo (Spain) 2021, Supreme Court

Workers were found to be employees, and Glovo was held to be a logistics company rather than an intermediary.

Deliveroo (Netherlands) (2021)

The Federation of Dutch Trade Unions (Federatie Nederlandse Vakbeweging, FNV) successfully argued that Deliveroo was subject to the collective labour agreement for the professional transport of goods. As a result, those delivering for Deliveroo are entitled to statutory protections including the recognition that waiting time is payable working time.

Key case law in determining working time:

C-266/14, CC.OO. v Tyco (2015)

Where workers do not have a fixed work location, travel time to and from home and between clients must be considered working time, not least because workers are not free to pursue their own interests during this time.

C-518/15,  Ville de Nivelles v Matzak 

Stand-by time which a worker spends at home while being duty bound to respond to calls from their employer within eight minutes, significantly restricting opportunities for other activities, must be regarded as working time. 

Key case law on the restriction of automated decision-making:

C-634/21, SCHUFA 

Credit scoring amounts to an automated decision within the meaning of Article 22 of the GDPR. Platform employers such as Ola Cabs and Free Now have similarly used ‘fraud probability scores’ in the automated decision to assign work to platform workers.

Uber and Ola cases – Amsterdam Court of Appeal

Court ruled that workers were dismissed by automated decision without meaningful human intervention in the case of Uber. The court also held that Ola Cabs had docked the pay of a worker by automated decision without any meaningful human intervention. The court held that dynamic pay systems were also automated decision-making systems in the sense of Article 22 GDPR.

Glovo Italy 

In 2024, the Italian DPA fined Glovo €5 million for GDPR violations arising from automated decision-making relating to the assignment of work, the unlawful processing of biometric data, and the monitoring of workers outside working time.

Foodinho – Italy

The Italian DPA fined Foodinho for automated decision-making relating to the assignment of work which operated to the detriment of workers with medical issues or caregiving responsibilities.

The combination of existing employment law, the GDPR, and the transposed PWD will be strategically very important to achieving the triple objectives of securing employment status, recognition of all working time, and the curbing of exploitative automated decision-making related to dynamic pay.

The GDPR is rooted in the European Convention on Human Rights (ECHR), and infringements such as intrusive surveillance and algorithmic wage discrimination are degrading and an offence to the dignity of any human being. Employment rights, likewise, are human rights encompassing key ECHR rights, including the right to work, protection from unfair dismissal, fair and just conditions, non-discrimination, collective bargaining, freedom of association, and information and consultation.

The historic lack of enforcement of employment and data protection law has disproportionately affected the poor and working classes. Rather than rely upon government-led corrective action, workers have had to try to seek remedy through expensive and complex legal and regulatory routes.

Platform employers have in any case defied the law, often secure in the knowledge that there would be no government enforcement and that the risk of penalties was not a sufficient motivator of action. Platforms have used complex transnational structures and intermediaries to distance themselves from risk. Contracts with workers are dense and changed at will with no right of negotiation. The PWD will better support workers, at least by empowering collective action and entrenching the right of redress. The combination of worker organising, collective action, and strategic litigation across employment and data protection law can help workers gain both redress and real leverage.

The presumption of employment importantly shifts the burden of proof, and the right to comprehensive algorithmic transparency can help reveal the true nature of the employment relationship and expose ‘direction and control’. Workers will have additional legal powers to penetrate the veil of intermediary structures through joint and several liability.

However, platform workers may still face years of complex litigation to secure legal classification of employment and to secure recognition of all working time. In the meantime, technology development marches ahead of the regulation with machine learning posing a significant challenge to transparency. Does transparency of yesterday’s automated decision on dynamic pay tell me anything of use about how I am being paid today?

These obstacles are formidable, if not insurmountable, for individuals, but with collective and institutional action over time, workers can build the power, leverage, and knowledge necessary to secure their rights.

Any worker can challenge for employment status with the procedural presumption of employment in accordance with national law once transposition of the directive is complete. Any worker, regardless of employment status, can bring a case involving violations of protections against unlawful algorithmic management (Articles 7-12 PWD). Any worker can bring complaints to the data protection authority relating to failure to provide information about automated decision making and to seek the prohibition of unlawful decision-making (Articles 15 & 22 GDPR). Workers may also resort to district courts depending on local procedural rules.

Workers have the right to redress (Article 18 PWD) and the right to participate in group complaints (Article 19). Similarly, Article 80 GDPR provides the right to take collective action.

Despite the presumption of employment, workers will still likely need clear evidence of ‘direction and control’ during task assignment and waiting time to secure not only status but also pay for all working time. To tackle dynamic-pay automated decision-making systems as a driver of precarity, workers will need algorithmic transparency and evidence of harmful impact.

Workers will certainly need access to data science and econometric expertise to provide independent analysis of collective worker pay transaction data over time, in order to identify algorithmic causes of harm and quantify their impacts.
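By way of illustration, the sketch below shows one way pooled task-level pay data might be analysed to estimate unpaid waiting time and effective hourly pay per worker-day; the data format and column names are assumptions, and a real analysis would require the data science and econometric expertise noted above:

```python
# Hedged sketch: estimate engaged time, waiting share, and effective hourly pay
# per worker-day from pooled task-level transaction data.
import pandas as pd

# Assumed columns: worker_id, task_start, task_end, pay_eur
tasks = pd.read_csv("pooled_pay_transactions.csv", parse_dates=["task_start", "task_end"])

tasks["task_minutes"] = (tasks["task_end"] - tasks["task_start"]).dt.total_seconds() / 60
tasks["work_date"] = tasks["task_start"].dt.date

# Approximate each worker-day: span from first pick-up to last drop-off vs. time spent on tasks
daily = tasks.groupby(["worker_id", "work_date"]).agg(
    first_start=("task_start", "min"),
    last_end=("task_end", "max"),
    engaged_minutes=("task_minutes", "sum"),
    pay_eur=("pay_eur", "sum"),
)
daily["span_minutes"] = (daily["last_end"] - daily["first_start"]).dt.total_seconds() / 60
daily["waiting_share"] = 1 - daily["engaged_minutes"] / daily["span_minutes"]
daily["pay_per_span_hour"] = daily["pay_eur"] / (daily["span_minutes"] / 60)

print(daily[["engaged_minutes", "span_minutes", "waiting_share", "pay_per_span_hour"]].describe().round(2))
```

Tracking how pay for comparable tasks varies across workers and over time in the same pooled data can similarly help surface patterns consistent with dynamic pay or algorithmic wage discrimination.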

Workers will be able to rely on the information requirements of Article 9 PWD and the requirement to provide access to the DPIA (Article 8 PWD). In addition, workers can rely on Article 21 PWD to petition the court to order that access to evidence be provided.

General Data Protection Regulation

By Itxaso Domínguez De Olazábal

The General Data Protection Regulation (GDPR) provides one of the strongest legal tools in the European Union for holding digital platforms accountable for their data practices, particularly those involving automated decision-making (ADM). It offers a layered and flexible framework for challenging discriminatory, opaque, and unfair ADM practices commonly deployed by platforms.

The General Data Protection Regulation (GDPR) provides one of the strongest legal tools in the European Union for holding digital platforms accountable for their data practices, particularly those involving automated decision-making (ADM). While often associated with consent and transparency, the GDPR’s deeper promise lies in its ability to challenge structural imbalances of power by regulating the ways in which data is collected, inferred, and used to make decisions about people–especially those decisions that affect their rights, dignity, and life chances.

A central provision is Article 22, which grants individuals the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects. This includes, for example, being denied access to employment opportunities, loans, housing, or welfare based on algorithmic assessments. In practice, platforms are increasingly using ADM systems to manage workers (especially gig workers), assign creditworthiness scores, personalise content, and determine eligibility for services–often relying on obscure profiling logic, proxy variables, and third-party data brokers. In the financial sector, credit scoring systems often serve as powerful gatekeepers, shaping who gets access to essential services such as housing, electricity, or loans. These scores frequently carry legal or similarly significant effects, regardless of whether they are treated as formally binding by the controller. Such scoring decisions are frequently made without meaningful transparency or accountability. These systems routinely affect people’s livelihoods and wellbeing, particularly among racialised, precarious, or low-income communities.

The GDPR enables civil society actors, workers, and affected individuals to challenge these practices by requiring that decisions be made lawfully, fairly, and transparently. Article 5’s fairness principle prohibits arbitrary, biased, or opaque processing. Articles 13-15 require controllers to provide accessible, specific information about the nature and purpose of processing, including the existence and logic of ADM, and its consequences. Article 35 obliges controllers to assess risks to rights and freedoms before deploying ADM systems–a powerful entry point to demand scrutiny of high-risk technologies.

Critically, GDPR enforcement is moving toward a more systemic understanding of harm. Data Protection Authorities (DPAs) and courts are starting to evaluate whether human involvement in ADM is meaningful or merely cosmetic; whether so-called ‘legal bases’ for processing (such as contractual necessity or legitimate interest) are robust and contextually appropriate; and whether platforms are fulfilling their duties to mitigate risks and enable rights. As such, the GDPR offers a uniquely flexible–and rights-driven–framework for contesting platform power.

How can this legal pathway be used to litigate against platforms?

The GDPR offers a layered and flexible framework for challenging discriminatory, opaque, and unfair ADM practices commonly deployed by platforms. Strategic litigation can target multiple points in the data processing chain–from the initial collection of personal data to the final decision-making output.

Litigators can rely on:

  • Substantive rights, such as the right not to be subject to solely automated decisions with significant effects under Article 22;
  • Procedural rights, including the rights to information and access under Articles 13-15;
  • Organisational obligations, such as the requirement to conduct a Data Protection Impact Assessment (DPIA) under Article 35;
  • Structural principles, including fairness, transparency, and data minimisation under Article 5.

Strategic litigation often begins by asserting information rights. Articles 13-15 require platforms to disclose whether they use profiling or ADM, the underlying logic, and the potential consequences for individuals. Coupled with DPIA obligations under Article 35, these rights enable claimants to demand not just transparency, but also pre-emptive scrutiny of ADM systems before they are deployed.

Recent CJEU case law (Dun & Bradstreet (CJEU, 2025)) clarified that the right to access meaningful information under Article 15(1)(h) GDPR is instrumental for exercising the right to contest and seek human intervention under Article 22(3) GDPR. Strategic litigation must therefore assert access to meaningful explanations as a prerequisite for enforcing ADM safeguards.

Legal action can target:

  • The input stage: challenging the legality and necessity of collecting or inferring data used in profiling;
  • The processing stage: interrogating whether the ADM system respects fairness and transparency obligations;
  • The output stage: contesting solely automated decisions that significantly affect rights and interests.

Discrimination often arises not from the explicit use of protected categories of data, but from proxies–such as address, household composition, or educational background–which can reflect and reinforce structural inequalities. This underscores the need to interrogate not only inputs and outputs, but the design logic and data infrastructures ADM systems rely on.

Litigation may also target indirect ADM, where human intermediaries rely on algorithmic outputs, like credit scores, in ways that replicate or validate automated assessments without meaningful scrutiny. Even when a final decision involves some human involvement, GDPR rights may still apply if the influence of automation is dominant or unchallengeable in practice.

Where decisions produce serious effects–such as gig workers being deactivated, loan applications being denied, or welfare benefits being suspended–Article 22 strengthens claims by imposing specific safeguards. These include the right to meaningful human intervention, the ability to express a point of view, and the right to contest the decision. Platforms that cannot demonstrate robust, non-automated review processes are likely to be in breach.

Moreover, litigation can reveal misuse of lawful bases for processing. Many platforms attempt to justify ADM through ‘legitimate interest’ or ‘contractual necessity’, even where these grounds are poorly substantiated. Strategic challenges to the validity of these legal bases–and to the proportionality of ADM systems–can dismantle harmful practices and demand corrective measures.

Even where an ADM system qualifies for the exceptions under Article 22(2) (e.g., contractual necessity, legal authorisation, or explicit consent), controllers must demonstrate strict compliance with the terms of the exception. Explicit consent must meet a higher threshold than standard consent, requiring formal, informed, and voluntary agreement. Likewise, reliance on ‘contractual necessity’ must be narrowly interpreted and justified by the essential nature of the service.

Several cases demonstrate how the GDPR can be strategically deployed against harmful ADM practices:

  • Schufa (CJEU, 2023): In December 2023, the CJEU ruled that the automatic creation of a credit score by SCHUFA, Germany’s largest credit bureau, could fall within the scope of Article 22 GDPR. The Court clarified that even if the score itself is not formally binding, it may still constitute an automated decision under Article 22 if it produces legal or similarly significant effects, such as influencing lenders’ decisions or denying access to credit. The ruling confirms that the structure and impact of the decision-making process, not just its formal framing, determine whether Article 22 applies;
  • Glovo (Italy, 2024): In a landmark decision, the Italian DPA fined Glovo’s Italian subsidiary 5 million euros for GDPR violations linked to the use of ADM in worker management systems. The fine followed an investigation involving reverse engineering which exposed systemic privacy breaches and algorithmic management practices that violated Italian labour law and the GDPR. Although the fine was imposed on the Italian entity, the unlawful practices stemmed from Glovo’s technology developed in Spain. This decision opens the door for similar actions in other countries where Glovo operates; 
  • Unsere Wasserkraft (Austria, ongoing 2024): A GDPR complaint was lodged with the Austrian DPA against Easy Green Energy for automatically rejecting customers based on credit rankings provided by a private scoring company (KSV 1870). The case highlights the growing use of automated profiling to deny access to essential services like electricity, raising concerns about systemic financial exclusion;
  • Portuguese DPA case on exam proctoring (Portugal, 2021): The Portuguese DPA ruled that token human involvement in AI-based exam proctoring systems did not exempt the educational institution from Article 22 GDPR obligations;
  • Foodinho (Italy, 2021): The Italian DPA fined the food delivery platform Foodinho for using a scoring algorithm that penalised riders without ensuring meaningful human oversight, transparency, or fairness. The system disproportionately harmed riders who could not accept shifts due to medical or caregiving obligations; 
  • Uber cases (Netherlands, 2021): Dutch courts ruled that several ADM-related practices by Uber, including driver deactivation and algorithmic shift allocation, failed to meet GDPR requirements. The courts highlighted the lack of adequate human intervention and transparency; 
  • SyRI case (Netherlands, 2020): Although not a GDPR-specific case, the Hague District Court dismantled the SyRI welfare fraud detection system for violating principles of transparency, proportionality, and human rights protections. The case reinforced broader arguments against opaque profiling systems.

Yes. The GDPR’s protective framework can be amplified through:

  • The EU Charter of Fundamental Rights, especially Articles 8 (data protection), 21 (non-discrimination), and 47 (effective remedy);
  • Rights-based litigation that is attentive to the fact that automated decision-making is not limited to consumer contexts but increasingly conditions access to fundamental rights such as education, housing, and social protection, reshaping public service delivery and embedding discriminatory logics into the very systems designed to uphold equality;
  • The Employment Equality and EU Equal Treatment Directives, where profiling or risk scoring leads to indirect discrimination based on race, gender, disability, or age;
  • Consumer protection law, which can address profiling-based price discrimination or exploitative targeting;
  • Labour law, in particular national and EU labour protections, which can be invoked in cases of algorithmic management, performance monitoring, or discriminatory shift allocation;
  • Strategic complaints that combine GDPR violations with breaches of national labour protections: where algorithmic management practices infringe both data protection rights and labour rights, litigators can argue cumulative violations. This approach, successfully used in the context of platform work, highlights the importance of bridging data rights and employment law to achieve comprehensive remedies;
  • The Platform Work Directive, which specifically seeks to regulate algorithmic management in platform work by establishing transparency requirements for automated monitoring and decision-making systems, improving access to information, and strengthening workers’ rights to contest automated decisions;
  • The emerging frameworks of the Digital Services Act and AI Act, which offer complementary risk-based obligations;
  • Collective redress mechanisms, where multiple claimants face similar harms, enabled by the Representative Actions Directive and Article 80 GDPR.

By situating GDPR litigation within broader legal and societal contexts, civil society actors can address not only individual harms but also the structural dynamics of platform power. However, it is important to be mindful of the principle of ne bis in idem (prohibition of double jeopardy). Combining different legal avenues must be carefully strategised to avoid situations where the same conduct is sanctioned multiple times under different frameworks, which could raise procedural fairness issues. Strategic litigation planning must therefore ensure coherence, respect for procedural safeguards, and complementary–rather than duplicative–use of overlapping instruments.

The GDPR is not simply a compliance regime–it is a human rights instrument rooted in dignity, autonomy, and social justice. It provides a legal and discursive framework to challenge the systemic use of personal data to profile, sort, and exclude. ADM systems do not merely infringe on procedural data rights; they restrict access to livelihood, shelter, and mobility. By asserting data protection as a fundamental right, litigation under the GDPR can help communities reclaim agency and expose the cumulative effects of data-driven injustice.

By challenging discriminatory ADM, activists and litigators can make visible the systemic exclusions encoded in data-driven governance. Cases that reveal the racialised, gendered, or classed effects of profiling–even when no overtly sensitive data is processed–can advance intersectional justice. Moreover, GDPR litigation can spotlight the ways in which platforms outsource governance through ADM: determining eligibility, assigning value, and distributing risk in ways that escape democratic oversight. By asserting rights to transparency, explanation, and contestation, individuals and communities can resist this dynamic and demand public accountability.

 

Section 2. Strategic Litigation

While the GDPR offers strong individual rights on paper, in practice, accessing and exercising these rights can be difficult–especially for those most affected by ADM. People impacted by gig work scoring, credit exclusions, or welfare algorithms are often precarious, overburdened, and unsupported. Language barriers, legal complexity, and fear of retaliation can all discourage action.

However, strategic litigation under the GDPR can support community mobilisation when paired with:

  • Data Subject Access Request (DSAR) campaigns to gather evidence, coupled with FOIA requests;
  • Trade union representation in disputes over algorithmic deactivation or performance scoring;
  • Collaborations with digital rights groups that can provide legal and technical assistance;
  • Coordinated complaints under Article 80 GDPR.

The GDPR thus becomes more accessible when used collectively. Civil society organisations, unions, and advocacy networks can play a vital role in submitting access requests, coordinating complaints, supporting litigation, and translating technical processes into actionable claims.  Where organisations combine legal, technical, and community organising capacities, the GDPR can become a powerful tool to contest platform injustices and build movement-led strategies. These approaches reduce the individual burden and embed GDPR enforcement within collective action strategies.

Several challenges are likely to emerge:

  • Opacity of systems: platforms often conceal how ADM works, citing trade secrets;
  • Superficial compliance: privacy policies are vague, and responses to access requests are incomplete or automated;
  • Limited transparency tools: DSARs may yield incomplete or misleading information, especially where ADM logic is not clearly documented;
  • Asymmetric power: affected individuals often face information and resource deficits;
  • Over-reliance on soft enforcement: some DPAs delay investigations or hesitate to sanction systemic violations;
  • Inconsistent enforcement: some DPAs are under-resourced or reluctant to act on complex, systemic complaints, with cases sometimes taking years;
  • Misleading legal bases: platforms misuse GDPR Article 6(1)(b) (contractual necessity) or Article 6(1)(f) (legitimate interest) to justify profiling.

These challenges highlight the importance of coalition-building and strategic foresight. Movements must prepare not only legal arguments but also supporting public narratives, investigative tools, and multi-sectoral alliances.

Several routes are available to initiate enforcement:

  • Any individual data subject may file a complaint with a DPA under Article 77 GDPR;
  • Mandated NGOs under Article 80 GDPR may file complaints on behalf of individuals;
  • Individuals can seek judicial remedies under Article 79 GDPR if their rights have been infringed.

While the GDPR does not in itself create a stand-alone cause of action in court in every Member State, it enables individuals to invoke national remedies when their rights are violated, depending on the applicable procedural law. Strategic complaints benefit from accompanying public advocacy and coalition-building.

Moreover, enforcement can–and should–be initiated not only through individual complaints but also ex officio by proactive DPAs when ADM systems pose systemic risks to rights.

Strategic litigation against ADM systems requires a mix of legal, technical, and testimonial evidence, including:

  • DSARs that reveal personal data, profiling logic, or risk scores;
  • Copies of DPIAs, which platforms are required to conduct under Article 35 GDPR;
  • Internal documentation, such as protocols, templates, or policies obtained via leaks or testimony;
  • Code or system audit results, especially where reverse engineering is possible;
  • Worker and community testimony, documenting structural patterns of exclusion;
  • Media and academic investigations that corroborate platform behaviour.

Technical evidence gathering can be significantly strengthened through reverse engineering and black-box testing of apps and platforms, especially where direct access to ADM systems is impossible. These methods can allow researchers and litigators to infer how decision-making systems operate in practice, uncovering unfair or discriminatory profiling without needing source code access. Successful investigations rely on the informed consent and active cooperation of workers and other affected communities, supported by trade unions or civil society organisations, to ensure that ethical standards are upheld and that the evidence is robust enough to support legal challenges.

Litigators should also consider gathering evidence about the controller’s organisational environment, including reporting lines, chains of approval, and training practices, to challenge claims of meaningful human intervention in ADM systems.

In the context of credit scoring, evidence gathering should aim to reveal the full chain of decision-making, including the source of data inputs, how scores are calculated, how they are shared, and how they affect downstream access to services. DSARs must explicitly include requests for explanations of inferred or derived data, profiling criteria, and any recipients of scoring outputs. Where scores are used to influence decisions without adequate oversight or recourse, this may support a claim of ineffective human intervention under Article 22(3).

Last but not least, controllers must disclose explanations about the procedures and principles underlying automated decisions. The complexity of the processing cannot be used to excuse opacity: explanations must be concise, intelligible, and accessible, as required by Article 12 GDPR. In line with recent CJEU case law (Dun & Bradstreet (CJEU, 2025)), controllers cannot invoke trade secrets or the protection of third parties’ personal data as an automatic ground to refuse access to meaningful information about ADM systems. While a balancing exercise is necessary, the right to explanation–and the effectiveness of data subjects’ rights under Articles 15 and 22 GDPR–must prevail where essential to enable individuals to contest or understand an automated decision.

Corporate Sustainability Due Diligence Directive

By Lori Roussey

The Corporate Sustainability Due Diligence Directive aims to establish a level playing field for corporate responsibility on human rights and environmental protection as well as reparation. The accountability pathways of the CSDDD encompass civil liability, administrative enforcement, and company-level grievance mechanisms.

In the period between the Bhopal tragedy of 1984 in India and the collapse of the Rana Plaza in 2013 in Bangladesh, a litany of oil spills and disasters destroyed entire communities and the environment, with little to no consequence for multinationals. The Rana Plaza disaster illustrated how thousands of women risked their lives in Global South countries while working to produce goods for Global North markets. This tragedy accelerated pressure from French civil society and unions to address corporate impunity regardless of where multinationals operate. In 2017, France adopted a strict legal framework on corporate responsibility. This law was rooted in the 2011 UN Guiding Principles on Business and Human Rights, with the addition of environmental protection. The initiative was then emulated by other EU Member States, such as Germany, Poland, and Spain. Civil society then turned to Brussels, pushing for harmonisation. This push led to a corporate responsibility Directive.

The Corporate Sustainability Due Diligence Directive (2024/1760) (CSDDD) aims to establish a level playing field for corporate responsibility on human rights and environmental protection as well as reparation. It applies to all EU multinationals employing over 1000 people and with a net worldwide turnover above 450 million euros, covering their activities in the EU and around the world. Additionally, the Directive applies to parent companies of multinationals, certain franchises, and foreign multinationals operating in the EU once their turnover in the EU is over 450 million euros. Companies in scope will need to oversee and implement human rights and environmental safeguards, while aiming for carbon neutrality by 2050.

The CSDDD, like the UN Guiding Principles and the French due diligence law, adopts a purposefully broad interpretation of corporate power. Obligations on multinationals extend to ‘their subsidiaries, and the operations carried out by their business partners in the chains of activities of those companies’ (Article 1). This specification bears echoes of the Rana Plaza tragedy, where victims could only seek the accountability of local businesses. The material scope of the Directive thereby embeds sustainability throughout the production chain. It should be noted that, in February 2025, the Commission proposed to amend the Directive to narrow its scope to companies and their business partners, excluding indirect partners unless ‘plausible information’ indicates otherwise.

Amongst the multinationals in scope, we find United States technology platform companies like Google, Meta, Amazon, and Apple. We define ‘platforms’ as gatekeeping companies whose business model substantially relies on or facilitates the processing of personal data. The majority of multinationals covered by the Directive are European companies, estimated to be around 6000 by the European Commission, compared with 900 companies from outside the EU. With respect to platforms, the CSDDD should, for instance, enable actions seeking remedies for pollution generated by the data centres of American multinationals in Europe or hold European companies accountable for the training of their technologies on populations of the Global South.

Now that the Directive has entered into force, the phase of transposition by Member States begins. By the end of 2027, all EU countries must have adapted their legal frameworks to the Directive. In 2028, enforcement can begin for selected companies. All companies in scope will be facing enforcement by 2029.

Even with the potential simplification measures, prominent platforms remain in scope, with litigation able to start from 2028. The accountability pathways of the CSDDD encompass civil liability, administrative enforcement, and company-level grievance mechanisms.

Regarding administrative enforcement, injunctive orders to cease certain actions or to repair harm, requests for documents, and sanctions of up to 5% of global net turnover are available to national Supervisory Authorities. Such sanctions can follow investigations undertaken by Supervisory Authorities on their own initiative or in response to ‘substantiated concerns’ reported by a legal or natural person (Article 28). Sanctions harmonisation will be ensured in the long run via the European Network of Supervisory Authorities.

Civil liability damages may be sought in national courts for harm caused intentionally or negligently (Article 29). The fact that third-party entities are covered by the CSDDD due to their association with multinationals is particularly relevant for this pathway, as the result is that most of the value chain is covered. That said, the CSDDD is only meant to establish a basic standard. If civil liability rules are more severe in the Member State where litigation takes place, national rules should be relied upon to seek stronger impacts. An important caveat nonetheless applies: Article 29 only encompasses harm caused to 1) a legal or natural person 2) whose interests are protected by the law of their country. This has the effect of restricting environmental claims. For example, if the French biometrics leader Idemia were to pollute a river in Laos, it would not be possible to litigate and demand reparations if the river is not a legal person.

Lastly, the CSDDD mandates the creation of publicly available complaint mechanisms (Article 14), which are open to a broad range of stakeholders beyond harmed individuals and civil society organisations.

For all three of these pathways, unions and non-governmental organisations (NGOs) that support individuals can be in the legal driving seat, enabling them to effectively defend against egregious impacts of platforms. This means that the CSDDD could, for instance, be the launchpad for private enforcement initiatives seeking platform accountability for the pollution and disproportionate energy consumption of data centres. To illustrate with European data, in Ireland roughly a fifth of electricity consumption is already attributable to data centres. Actions could also be taken to fight the illegal purchase of land for the hosting of data centres. Furthermore, the CSDDD could offer recourse to challenge the contracting of companies that hire underpaid, precarious workers for content moderation, who are then vulnerable to developing trauma. This practice was relied upon by OpenAI in Kenya to train ChatGPT, and Facebook too exploited an outsourced workforce. The Annex of the CSDDD lists rights that are protected in international instruments and so provides information as to which instruments are available to impacted communities in countries outside of the EU.

While in the context of the CSDDD there is still no case law available, its French counterpart has been tested in court. The first ruling on the French due diligence law sanctioned the French public postal service for failing to ensure safe working conditions for the often undocumented workers of its contractors and of those contractors’ own subcontractors. The postal service was not subject to financial sanctions; instead, it was required to document in its public due diligence assessments that working conditions violating human rights are among the risks it must pay particular attention to.

Several cases targeting the environmental harm of the French multinationals TotalEnergies and Danone reached the courts before the above ruling. The TotalEnergies case in Uganda is still pending, and the Danone case ended with a mediated negotiation. In another case, with the collaboration of Data Rights, a coalition of Kenyan NGOs challenged the corporate responsibility of the French multinational Idemia, a global leader in biometric authentication and surveillance. Idemia was contracted by Kenyan authorities to deploy biometric identification solutions. The system put in place violated numerous human rights. The primary goal of the case, brought before French courts in 2022, was to have the courts find that the identification system was discriminatory and harmful to all Kenyan citizens. However, due to the lengthy litigation process, the coalition decided to forgo the uncertainty of a court decision and opted for a mediated negotiation with Idemia instead. This led to the company changing its public due diligence documentation in 2023.

Aside from embedding Member States’ laws into litigation strategies, other instruments may prove complementary. To build on the example above, Data Rights decided to join the coalition of NGOs in the litigation against Idemia not only to show solidarity, but also because it presented a strategic opportunity to lodge a regulatory complaint against the French multinational based on the General Data Protection Regulation (GDPR). The GDPR, like the CSDDD, follows European companies wherever they operate. This trans-jurisdictional scope is often overlooked in the compliance programmes of multinational entities. Given that the GDPR applied to many of the concerns raised, the case against Idemia was an opportunity to illustrate the GDPR’s jurisdiction and pave the way for ambitious international cases. More generally, the addition of the GDPR prong to the litigation strategy was warmly welcomed by the Kenyan victims and their law firm: it was seen as complementing the corporate responsibility case by adding financial and reputational sanctions that may more reliably influence the behaviour of the multinational, thereby furthering the reparation of the harm experienced by the Nubian tribe of Kenya. Our GDPR case, joined by Algorithm Watch, is pending.

With respect to evidence collection, the actions of multinationals abroad present substantial challenges. For the Kenyan case, the coalition was supported by internationally renowned investigative entities, yet documents were still missing. Freedom of information laws and the GDPR’s Article 15 right of access are therefore both fundamental to evidence gathering. We are now seeing how the experience of digital rights organisations working with GDPR enforcement is instrumental to lawyers bringing CSDDD cases. Notably, the GDPR is a complementary tool to the CSDDD and national laws because, like the CSDDD, it recognises power dynamics: contractors and subsidiaries are to be dealt with only via the multinational, which is, in GDPR terms, the sole data controller. Data controllers sit at the top of the pyramid, setting the purposes and means according to which personal data is processed.

With regard to collective redress, the CSDDD does not establish a specific EU-wide mechanism. Instead, it leaves the provision of collective legal actions to the discretion of EU Member States. Furthermore, the CSDDD is not encompassed by the EU’s Representative Actions Directive (2020/1828), which is limited to specific consumer protection laws. As a result, violations of the CSDDD are not automatically subject to the collective redress mechanisms provided under that Directive; instead, this will depend on Member State law.

The layered transposition timeframe of the CSDDD means that for litigation efforts to be brought before a judge or regulator, harm must still exist when the Directive becomes enforceable against the targeted company.

Both the CSDDD and existing national laws have ambitious and wide scopes. It is important to take the time to map which entities are in scope for each case, as the net of accountability may be wider than expected, creating more opportunities to evidence harm. This is especially relevant where harm has been suffered by marginalised communities who do not possess the necessary financial resources and expertise to seek justice. The wider the group of victims, the more illustrative examples can be gathered to present to philanthropic organisations.

As previously mentioned, freedom of information and data protection can be levers to obtain evidence. It is worth noting that data protection best practice requires the deletion of personal data after some time–therefore, measures agreed directly with the company to ‘freeze’ relevant data are sometimes needed.

When evidence cannot be shared because of how it was obtained, such as through a whistleblower or an investigative journalist, turning to administrative authorities can be a useful avenue for evidence gathering. Parties can request that an investigative authority initiate an investigation in the hope that it will independently reveal the evidence the party already has (but cannot share). This ability for stakeholders to ask for investigations into phenomena they cannot explain is a robust counter-power.

For environmental cases, the Access to Environmental Information Directive (2003/4/EC) implements the Aarhus Convention and exists specifically to help citizens access environmental data. This tool is often relied upon by the strategic environmental litigation NGO ClientEarth, and it could complement any CSDDD environmental case.

An important restriction of the CSDDD is that it will only support protections existing in the local laws where the harm takes place. This may prove arduous for international cases. That being said, opportunities for transnational enforcement may be found in the CSDDD Annex’s list of international treaties and conventions.

Another way to protect individuals or the environment, even if such protections may not exist locally, is to invoke other European laws that create this protection overseas. In the pending Idemia case, using the GDPR meant that the French GDPR regulator became the regulator charged with protecting the interests of members of the Nubian tribe in Kenya. Since this authority has the power to obtain documents from the multinational, this avenue might become a means to secure evidence that local communities were unable to obtain themselves.