The EU Artificial Intelligence Act and the Future of AI Regulations

By: Eliana Aemro Selassie

Edited by: Micah Sandy and Alexandria Nagy

On March 13, 2024, the European Parliament approved the Artificial Intelligence Act, a set of safety measures and regulations designed to protect citizens while leaving AI companies room to innovate and advance the technology [1]. The act is the first “comprehensive regulation on AI by a major regulator” anywhere in the world, a milestone in the governance of AI [2]. As AI systems have grown more powerful and increasingly capable of operating without human input, governments have become more concerned with regulating them, especially because they can be weaponized for harm. The EU is at the forefront of these efforts as the first governing body to pass an act with enforceable measures to regulate AI.

IBM defines AI as “technology that enables computers and machines to simulate human intelligence and problem-solving capabilities” [3]. AI has been enormously impactful because it can carry out complex tasks that would typically require human input and direction. In the last year the field has developed considerably, particularly generative AI, which has dramatically increased the capacity of AI to perform tasks without human input [4]. This independence offers many potential benefits, since it simplifies complex and mundane tasks typically done by people. Generative AI is especially easy to integrate into the workplace: it can summarize information at a speed and volume beyond human memory, reduces the need for people to perform routine tasks, and increases overall productivity.

Nonetheless, AI also poses many security and privacy risks to individuals and societies [5]. When AI-driven tools are used in policing, biased algorithms cause them to disproportionately target low-income people and people of color. Stanford University likewise describes the potential harms of using AI in legal matters, notably sentencing decisions, warning that “an algorithmic estimate of an individual’s risk to society may be interpreted by others as a near certainty—a misleading outcome even the original tool designers warned against” [5]. AI also poses privacy risks: because algorithms are designed to make decisions autonomously, without human intervention, they can collect and analyze personal data at great speed and scale [6]. The use of AI in biometrics is another growing concern for governments and individuals alike, given AI’s extensive ability to analyze facial, fingerprint, and iris recognition data, which makes tracking and identifying people far simpler [7]. Many view this as a substantial privacy violation and note the potential for these biometric capabilities to be weaponized for harmful ends.

The EU’s new act aims to address these risks in a manner that supports both citizens and corporations. The act is unprecedented because it provides a distinct framework for specific uses of AI, for business owners and citizens alike. It acknowledges that most AI systems are largely harmless and instead focuses on mitigating the larger risks posed by harmful algorithms in order to keep EU citizens safe. The act sorts AI systems into risk tiers: beyond a narrow set of applications that are banned outright as posing unacceptable risk, it distinguishes high-risk, limited-risk, and minimal-risk systems. High-risk AI refers to AI used in contexts that threaten the safety, privacy, or well-being of European citizens, corporations, or infrastructure, such as critical infrastructure (notably transportation), the employment and management of workers, law enforcement, and migration and border-control management. Any AI system in this category is subject to strict obligations and must be assessed for safety and accuracy before it can be put into use. Limited-risk AI refers to systems whose main risk to users is a lack of transparency about how the software operates; these face lighter obligations, chiefly requirements to disclose to users that they are interacting with AI. Finally, minimal-risk systems are left unregulated by the act because they do not pose a serious threat [8].
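
To make the tiered structure concrete, the sketch below is a purely hypothetical Python illustration of how a compliance team might tag its own AI systems against the tiers described above. It is not official tooling, and the names (RiskTier, EXAMPLE_CLASSIFICATION, obligations_for) and the example use cases are assumptions chosen for illustration, not the act’s legal definitions.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels loosely mirroring the act's risk tiers."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations and pre-deployment assessment"
    LIMITED = "transparency obligations (disclose AI use)"
    MINIMAL = "no additional obligations under the act"


# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "CV-screening tool for hiring": RiskTier.HIGH,
    "border-control risk assessment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Return the illustrative obligation level for a known example use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case)
    if tier is None:
        return f"{use_case}: not classified in this example"
    return f"{use_case}: {tier.name} risk ({tier.value})"


if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(obligations_for(case))
```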

Another critical component that sets the act apart is its human-centered approach. One of its primary goals is to keep European technology at the forefront of AI research while ensuring that the democratic and civil rights of European citizens are protected. These measures grew out of the Conference on the Future of Europe (COFE), which gathered a series of citizens’ proposals, including calls for stronger AI regulation. One proposal suggested “enhancing EU’s competitiveness in strategic sectors” through AI law, and another encouraged creating “a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control”. The act reflects these proposals: one of its key provisions bans AI applications that “threaten citizens’ rights” [1].

Another key provision restricts the use of biometric identification systems by law enforcement to narrowly defined and particularly serious situations, such as preventing a terrorist attack, and requires judicial authorization to do so. The act bans emotion-recognition software in workplaces and schools and forbids AI that “manipulates human behaviour or exploits people’s vulnerabilities”. Nonetheless, the act upholds COFE’s aim of not stunting technological innovation. It accommodates the use of AI in public works, public policy, and infrastructure, requiring extra precautions and protective measures in those settings so that AI can drive technological progress without putting citizens at risk. This demonstrates how the EU AI Act is unprecedented: it targets the specific risks that dangerous algorithms may pose while avoiding delays to scientific progress. Brando Benifei, the European Parliament’s Internal Market Committee co-rapporteur, described the act by saying, “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected” [1].

Given the extent of this progress in Europe, the question arises whether similar AI regulations will emerge elsewhere in the world. The United States government has made strides of its own, notably the White House’s Blueprint for an AI Bill of Rights. The Blueprint emphasizes the potential for AI to produce biased and discriminatory algorithms, to collect social media data unfairly and violate privacy, and to deliver unsafe patient care and medical systems. It is organized around five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. Its ultimate goal is to address the safety and discrimination concerns around AI in a fashion similar to the EU AI Act [9]. However, partisan disputes and the weight the United States places on technological innovation cast doubt on whether the Blueprint or similar legislation will be enacted soon [10]. Nonetheless, the EU AI Act shows considerable progress in global policy-making: as AI grows more capable and complex, policies to regulate it are being implemented alongside it, protecting people while still allowing technological development.

Notes:

  1. European Parliament. 2024. “Artificial Intelligence Act: MEPs adopt landmark law | News.” European Parliament. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.

  2. EU Artificial Intelligence Act. n.d. “The EU Artificial Intelligence Act.” EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. Accessed May 21, 2024. https://artificialintelligenceact.eu/. 

  3. IBM. n.d. “What is Artificial Intelligence (AI)?” IBM. Accessed May 21, 2024. https://www.ibm.com/topics/artificial-intelligence.

  4. Shaner, Kyle. 2024. “Examining the potential benefits and dangers of AI.” University of Cincinnati. https://www.uc.edu/news/articles/2024/02/examining-the-potential-benefits-and-dangers-of-ai.html. 

  5. Stanford University. n.d. “SQ10. What are the most pressing dangers of AI?” One Hundred Year Study on Artificial Intelligence (AI100). Accessed May 21, 2024. https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1-0. 

  6. Sher, Gai, Ariela Benchlouch, and Barnes Cellino. 2023. “The privacy paradox with AI.” Reuters. https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/. 

  7. Yurdasen, Deniz. 2023. “How Artificial Intelligence (AI) Is Used In Biometrics.” Aratek. https://www.aratek.co/news/how-artificial-intelligence-ai-is-used-in-biometrics. 

  8. European Commission. n.d. “AI Act | Shaping Europe's digital future.” Shaping Europe's digital future. Accessed May 21, 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. 

  9. The White House. n.d. “Blueprint for an AI Bill of Rights | OSTP.” The White House. Accessed May 21, 2024. https://www.whitehouse.gov/ostp/ai-bill-of-rights/. 

  10. Matthews, Dylan. 2023. “The AI rules that Congress is considering, explained.” Vox. https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable. 

Bibliography:

EU Artificial Intelligence Act. n.d. “The EU Artificial Intelligence Act.” EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. Accessed May 21, 2024. https://artificialintelligenceact.eu/.

European Commission. n.d. “AI Act | Shaping Europe's digital future.” Shaping Europe's digital future. Accessed May 21, 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

European Parliament. 2024. “Artificial Intelligence Act: MEPs adopt landmark law | News.” European Parliament. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.

IBM. n.d. “What is Artificial Intelligence (AI)?” IBM. Accessed May 21, 2024. https://www.ibm.com/topics/artificial-intelligence.

Matthews, Dylan. 2023. “The AI rules that Congress is considering, explained.” Vox. https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable.

Shaner, Kyle. 2024. “Examining the potential benefits and dangers of AI.” University of Cincinnati. https://www.uc.edu/news/articles/2024/02/examining-the-potential-benefits-and-dangers-of-ai.html.

Sher, Gai, Ariela Benchlouch, and Barnes Cellino. 2023. “The privacy paradox with AI.” Reuters. https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/.

Stanford University. n.d. “SQ10. What are the most pressing dangers of AI?” One Hundred Year Study on Artificial Intelligence (AI100). Accessed May 21, 2024. https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1-0.

The White House. n.d. “Blueprint for an AI Bill of Rights | OSTP.” The White House. Accessed May 21, 2024. https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

Yurdasen, Deniz. 2023. “How Artificial Intelligence (AI) Is Used In Biometrics.” Aratek. https://www.aratek.co/news/how-artificial-intelligence-ai-is-used-in-biometrics.