Is Election Integrity Integral to the Artificial Intelligence Act?
The Artificial Intelligence Act (AI Act), the new EU Regulation that introduces rules for AI systems according to their risk level, has been published in the Official Journal of the European Union. Its rules will apply in phases over the coming years.
As a horizontal framework, the Artificial Intelligence Act has the potential to affect a wide range of issues linked to fundamental rights, including the right to vote enshrined in Article 39 of the Charter of Fundamental Rights of the European Union. The Regulation itself states, both in its Recitals (for example Recitals 1, 2 and 8) and in Article 1, that its objectives include the protection of democracy and the Rule of Law as well as of the fundamental rights enshrined in the Charter, among them Article 39 on the right to vote and to stand as a candidate at elections to the European Parliament. Recital 48 also sets out criteria for defining high-risk AI systems, counting harms to fundamental rights, including the right to vote, among the relevant risks.
In light of these considerations, while the Artificial Intelligence Act was not designed specifically to protect election integrity against the use of AI systems, it can still serve as a tool to help ensure free and fair elections by shielding them from the potential negative impact of certain AI systems. How exactly it can do so is quite a different matter.
Several provisions of the AI Act refer, either explicitly or implicitly, to AI systems with the potential to affect election integrity, and prescribe measures to limit that impact according to the system's risk level. The Act distinguishes, in particular:
- prohibited AI systems, which cannot be deployed on the EU market;
- high-risk AI systems, which will need to comply with specific obligations such as conducting risk assessments and putting forward mitigation measures;
- limited-risk AI systems, which will have to comply with specific transparency requirements.
Many of the AI systems most often cited as a threat to elections, including deepfakes, General Purpose AI (GPAI) and chatbots, fall mostly into the limited-risk category, which provides the lowest level of protection of the three. The question this paper therefore tries to answer is: what kinds of AI systems linked to elections would fall into the other two, more protective, categories?