How can the DSA Guidelines on Election Integrity be improved?

The European Commission’s recently published draft guidelines for Providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) on the Mitigation of Systemic Risks for Electoral Processes address the risks of actual or foreseeable negative effects on electoral processes stemming from the design, functioning and use of services offered by VLOPs and VLOSEs within the meaning of Article 34.1(c) of Regulation (EU) 2022/2065. In this joint paper, the Civil Liberties Union for Europe (Liberties) and the European Partnership for Democracy (EPD) outline their feedback to the consultation on the Guidelines and provide recommendations for improvement, including the following:

Main recommendations:

  • Include mitigation measures for risks to civic discourse in the Guidelines
  • Take into account additional risks and related mitigation measures outlined below
  • Organise meetings on specific topics with both VLOPs/VLOSEs and civil society/experts, to exchange recommendations and best practices
  • Exhaust all available means to guarantee the implementation of the Guidelines through robust enforcement mechanisms

Liberties and EPD’s feedback to the Guidelines

The Civil Liberties Union for Europe and the European Partnership for Democracy are of the opinion that Article 34.1(b) of Regulation (EU) 2022/2065 ultimately aims to protect the human dignity of individuals affected by the design or functioning of VLOPs’ and VLOSEs’ services and their related systems, while Article 34.1(c) aims to protect the fundamental interest of the European electorate to live in stable and well-functioning democracies (cf. Recitals 81 and 82 of the DSA).

While we fully appreciate the European Commission’s rationale for narrowing the focus of its guidelines to electoral processes, particularly in light of the imminent European Parliament elections, the chosen approach is not without its limitations. Addressing immediate, high-profile risks during the pre-election period is crucial, yet concentrating solely on these fails to capture the broader, ongoing challenges to democratic discourse that occur at all times and, while less visible, have a non-negligible impact on the electoral process. By expanding the guidelines to encompass a wider range of influences on civic discourse, the Commission could offer support for VLOPs and VLOSEs in designing more robust protections for the democratic process.

At the same time, we highly value the recommendations presented in the Guidelines, particularly those on moderating the virality of content and on fostering third-party research, as many research questions remain open on the effectiveness of specific mitigation measures.

Below we describe the additional risks and some of the possible mitigation measures we identified in our literature research (“Identifying, analysing, assessing and mitigating potential negative effects on civic discourse and electoral processes”). We would like to emphasise that we consider it of utmost importance that more extensive mitigation measures regarding political advertising are implemented; in particular, the processing of all observed and inferred data in the targeting of political advertising should be stopped.

  1. Content amplification and algorithm curation

It should be noted that not only measures to tackle illegal content (Proposed Guidelines 3.1.22) can silence certain groups in society. Certain modes of content amplification and algorithmic curation can also demote or fully silence certain voices, leading to a lack of inclusivity in online discourse and a lack of information for voters on the viewpoints of marginalised groups on social issues discussed during the electoral campaign. Affordances can influence how certain social groups interact with one another. In particular, affordances that encourage uncivil discussions may lead already marginalised groups to silence themselves to avoid online or even offline aggression. Suboptimal design choices concerning accessibility can also lead to similar results.

Related mitigation measures recommended: VLOPs and VLOSEs should consider integrating design elements that foster inclusive and less polarising discussions. In addition, they should clearly communicate rules of engagement to improve the civility of interactions among users. VLOPs and VLOSEs should also consider implementing design features that encourage inclusive discussions and give visibility to marginalised voices while avoiding opinion bubbles by ensuring access to a plurality of voices. In addition, VLOPs and VLOSEs should introduce simplified language options, auto-translation, dictation features, and auto-generated captions to make digital content more accessible.

  2. Exacerbation of polarisation

Another risk VLOPs and VLOSEs should consider is how their services may exacerbate polarisation in society. This risk materialises as platforms, through design and moderation policies, inadvertently reduce the diversity of information accessible to individuals. This phenomenon, termed “polarisation by design”, facilitates the spread of content that is not only divisive but also emotionally charged, exacerbating societal divisions and undermining free and fair discourse.

Related mitigation measures recommended: VLOPs and VLOSEs have several options on how to tackle polarisation. We would like to emphasise here two of them. First, VLOPs should consider not using design features that facilitate quick negative responses to opposing views. Second, VLOPs and VLOSEs should develop algorithms that offer a balanced information diet, exposing users to a variety of viewpoints, particularly on controversial topics.

  3. Overremoval of content protected by copyright

An additional risk we would like to call attention to is the overremoval of content on the basis of copyright claims in political messaging. While copyright laws are designed to protect intellectual property, their application can inadvertently impact freedom of expression, especially when such material is used in a political context. This tension often manifests in claims for the removal of content that potentially infringes copyright but serves a significant political or public-interest purpose.

Related mitigation measures recommended: To tackle the challenges linked to content under copyright, VLOPs and VLOSEs should prioritise the investigation of copyright claims and apply exceptions and limitations where they exist, ensuring that legitimate uses of copyrighted material for political purposes are not unfairly penalised. In addition, they should establish robust and swift appeal processes for users to challenge unjustified removals or the inability to upload, ensuring that decisions are fair and consider the context of use.

  4. Organised attacks against civil society

It also needs to be mentioned that the online sphere has created increased opportunities for malign actors to carry out organised attacks against civil society with a much wider reach than in the offline world. There is also evidence of such extremist content being amplified on some platforms by recommender systems because it is polemic and hence generates more engagement. Such a phenomenon risks silencing civil society organisations and citizens willing to engage in discussions pertaining to the electoral campaign.

Related mitigation measures recommended: To mitigate the risk of organised attacks against civil society, VLOPs should adopt a CSO-specific strategic policy document recognising the need to protect civic space from organised online campaigns. In addition, they should put in place effective response mechanisms and best practices to address attacks against civil society.

  5. Micro- and nanotargeting of political advertising

While the proposed Guidelines address the issue of political advertising, it needs to be emphasised that online algorithmic systems make it possible to target political advertising at voters with extreme accuracy. Micro- and nanotargeting practices are particularly risky ahead of elections, as they could target swing voters and deliver tailored political messages that could even directly contradict those targeting different groups.

Related mitigation measures recommended: To tackle risks linked to political ads, VLOPs and VLOSEs should stop the processing of all observed and inferred data in the targeting of political advertising. In addition, they should restrict the options available for targeting political advertising based on provided data.

Additional measures VLOPs and VLOSEs should consider regarding political advertising are: a) conduct an analysis of the effectiveness of automated filters to identify political ads, b) ensure human oversight on automated filters to be able to identify mistakes, c) monitor advertising on pages in political categories more strictly, d) ensure stricter consequences for repeated violations of requirements for political advertising, e) guarantee consistent performance of automated filters independent of an ad’s language.

  6. Preferential treatment accorded to high-profile political figures

High-profile politicians often benefit from less stringent content moderation on social media, as seen with posts by figures like former US President Donald Trump. This preferential treatment, often justified by the newsworthiness of their statements, poses risks to the democratic process, including the potential for inciting real-world violence and fuelling distrust in democratic institutions.

Related mitigation measures recommended: To mitigate risks posed by the preferential treatment accorded to high-profile political figures, VLOPs should implement a policy of equal content moderation for all users, regardless of their political or societal status, which can help ensure that dis/misinformation and harmful content are adequately addressed. VLOPs should also educate users on the potential for dis/misinformation disseminated by political leaders.

Mitigation measures linked to Generative AI

Here we agree with the online political advertising watchdog Who Targets Me that platforms should avoid offering generative AI capabilities for the creation or enhancement of political advertisements, as this poses an unnecessary risk to the electoral process.

In addition, VLOPs and VLOSEs should consider banning the use of generative AI in all advertisements (see “Ad transparency – what’s missing for 2024?”).


In conclusion, we very much welcome the Guidelines, as they include a comprehensive set of recommendations on mitigation measures for electoral processes. We particularly appreciate the effort to involve civil society and researchers in the guidelines as fundamental actors in defining effective mitigation measures for electoral processes. We also welcome the acknowledgement that this is a starting point, shown specifically by the one-year review deadline. That said, to foster an even more open dialogue, we would suggest organising meetings on specific topics with both VLOPs/VLOSEs and civil society/experts to exchange recommendations and best practices.

Finally, it is of utmost importance that the risk assessment and risk mitigation process does not become a tick-box exercise for VLOPs and VLOSEs. For this reason, it is imperative that the Commission exhausts all available means to guarantee the implementation of robust enforcement mechanisms, in strict accordance with the provisions of the Digital Services Act (DSA).

Cover credits: European Union, 2022. Photographer: Nicolas Peeters.