AI regulation: challenges and opportunities for democracy and EPD

This blog explores the current challenges and opportunities Artificial Intelligence (AI) poses to established and emerging democracies. 

“Artificial intelligence” (AI) is a buzzword on everyone’s lips, carrying its share of promises and controversies. Beyond the hype, the European Union is planning to regulate these technologies through legislation expected at the beginning of 2021. This is an opportunity for EPD and its members to get involved in the debate and help shape a human rights-based regulation of AI with democratic safeguards.

AI and automated decision-making (ADM) systems are already being deployed all over Europe and are actively influencing citizens’ lives. Opaque algorithms are at the centre of issues such as online advertising transparency and the dissemination of disinformation. These systems have the potential to regulate and govern parts of the public sphere, from their use by public administrations and political parties to their use by private actors on the internet, where they control online content and access to information. More broadly, AI and ADM systems allow for data processing at scale, which can serve many useful purposes, such as automated and targeted public services. However, they can also be used by malicious actors, for example to surveil and influence voters or to target specific communities with political ads or disinformation.

AI and ADM systems are being used everywhere, affecting countries in democratic transition as well as established democracies, and playing a role in their electoral processes and governance. Moreover, they can have broader repercussions on citizens’ participation in democracy, faith in democratic institutions and trust between citizens and the state. Getting the regulation of digital technologies right is therefore of fundamental importance to democracies around the world. But the complexity and opacity of AI and ADM systems make digital policy particularly difficult for Members of Parliament and civil society to work on.

Lack of accountability and democratic debate

In the public sector, the deployment of AI and ADM systems raises serious concerns about accountability and transparency, and often lacks evidence of necessity and proportionality. Moreover, these systems are frequently introduced without public warning, debate or democratic oversight. Citizens, civil society and researchers do not know where and when such systems are deployed or how they are affected by them, and are not consulted on their use. States often argue that the deployment of these technologies is an inevitable form of economic progress and social advancement, shutting down democratic debate on the balance of benefits and dangers such technologies might bring.

Some may also be plunging into tech solutionism – the belief that technology is the answer to a wide range of problems, whether societal, social or health-related. Facial recognition deployed in public spaces after terrorist attacks or demonstrations, or the recent COVID-19 tracing apps, are examples of this phenomenon, where evidence of effectiveness is lacking.

While algorithms can help reduce costs and improve efficiency in certain cases, these benefits have to be properly balanced against the threats they can pose to democratic principles and fundamental rights. Meaningful participation and inclusion of citizens in digitalised governance means checks and balances as part of a democratic process and debate. This process should also provide a right to explanation and a right to challenge automated decision-making.

A threat to equality of citizens

AI/ADM systems often come with self-reinforcing bias, as they tend to replicate how things worked in the past. Even though equality of citizens is a democratic principle, the reality of equal treatment is often tainted by society’s biases around race, gender and minorities. Contrary to the claim that algorithms are neutral and objective, they actually reflect these same biases, but at a much wider and automated scale. Large-scale discrimination and exclusion are therefore possible, while the opacity and complexity of these systems can mean that the consequences remain unknown or unresolved. Beyond the technical issues surrounding bias in algorithms, transparency, explainability and accountability are key conditions for the deployment of AI/ADM systems – if there is ever a real need for them.

Unwarranted government surveillance

However, some uses of AI/ADM systems should not be deployed at all, as their impact on citizens’ freedoms and rights would be too great. Surveillance and profiling technologies such as facial recognition, when deployed in public spaces, are intrusive and can greatly disrupt citizens’ lives. They can have a chilling effect on freedom of expression and assembly, and can limit people’s ability to participate in public, social or democratic activities. The majority of member states in the European Union are already experimenting with such technologies, without any safeguards. As a result, organisations such as European Digital Rights (EDRi) are calling for a ban on biometric mass surveillance.

Social media and AI

Alongside the deployment of AI/ADM in the public sector, private actors are also using such systems in the online sphere, through content moderation and curation, critically affecting democracy as well. As independent media struggle to remain operational with severely reduced resources, the nature of political communication has changed in fundamental ways. The platforms’ own algorithms amplify phenomena that are harmful to democracy, such as coordinated disinformation campaigns and online hate speech, and provide key avenues for big money and foreign interference in political campaigning. This creates an unlevel political playing field, incentivises sensational rather than nuanced news and communication, and restricts the space for pluralist democratic debate.