Will Europe’s AI Act set new standards?

How the EU seeks to shape global rules for artificial intelligence while balancing innovation, ethics, and geopolitical competition.

In recent years, artificial intelligence (AI) has evolved at a vertiginous pace, but this unparalleled progress has been accompanied by growing concerns over its governance, its risks and the need for standards. In the rivalry between the U.S. and China, AI is now a strategic military, political and economic asset: one more reason to define a global regulatory framework. Among the main risks are the race for autonomous weapons, which could multiply cyberattacks and accidental conflicts, and the misuse of personal data, often fed without consent into the training of large language models. No less serious is the malicious use of AI: deepfakes, propaganda and mass surveillance. These critical issues have prompted governments, companies and research centres to call for common standards. The EU, backed by its reputation as a “regulatory powerhouse”, has chosen to take a leading role. With the adoption of the AI Act in June 2024, Brussels took a historic step, aiming to become a “norm entrepreneur” in the global governance of artificial intelligence. To understand its impact, it is worth examining the structure of the AI Act.

The structure

The European AI Act is the first comprehensive legal framework adopted specifically to regulate AI, although the EU had already set a precedent. In 2018 the European Commission established the High-Level Expert Group on AI (AI HLEG), which released the Ethics Guidelines for Trustworthy AI, defining trustworthy AI as lawful, ethical and robust, and promoting the concept of “responsible competitiveness”. Although these guidelines had limited impact, as they were non-binding, they still represented an achievement. Then, in 2020, the Commission published the White Paper on AI, proposing an approach based on innovation but with built-in liability safeguards. These steps established the EU as a first mover in AI oversight and shaped its vision for trustworthy AI.

The Act – whose provisions apply in stages – is based on a risk-tiered approach, where obligations depend on the level of risk posed by specific AI systems. The risk categories identified are the following:

  1. Unacceptable risk: these systems are prohibited outright, because they violate fundamental values (for example, by enabling discrimination or societal harm).
  2. High risk: these systems are permitted, but only under certain conditions. AI systems that affect essential services or critical infrastructure, for example, are subject to conformity assessment procedures both before release and afterwards, through ongoing checks that their obligations continue to be met.
  3. Limited risk: these systems are subject to transparency obligations, so that users understand they are interacting with AI and can make informed decisions.
  4. Minimal risk: no additional obligations apply, to avoid over-regulation.

Objectives and rationale

The rationale behind the Act rests on several elements: the urgent need to protect fundamental rights, ensure consumer safety and provide ethical leadership. Another key pillar is the promotion of EU values through the concept of “human-centric AI”. Promoted by the AI HLEG in 2019, it is based on the idea that AI systems should respect fundamental rights and ensure human agency (by putting humans at the centre of AI development and use), while also serving societal interests in a transparent way. This resonates deeply with the EU’s constitutional tradition, and AI norms are meant to follow the same logic. It is here that the Brussels effect becomes evident: by taking a middle way between the U.S. (market-driven) and Chinese (state-centric) approaches, and by promoting a type of innovation that goes hand in hand with human rights, the EU aims to shape and export this governance model to other countries without imposing it, while safeguarding its own interests. The ultimate aim is to promote global alignment around a more ethical and responsible AI governance. But to what extent will this approach prove successful?

Challenges and critiques

Although the potential benefits of the AI Act appear clear, numerous doubts and critical issues remain, and its success will largely depend on its practical application. It is unlikely that the U.S. and China will adopt the European approach anytime soon, and uncertainty prevails in other countries as well: the EU thus risks imposing very strict rules internally without really being able to influence global standards. In this scenario, investment and innovation could shift to more permissive markets, where companies would face fewer constraints.

The danger is that the EU’s rule-setting power will weaken as other countries develop their own regulations, which could happen soon given the growing pressure to respond to the risks of AI. It is true that the scope of the AI Act is limited to products intended for the European market, but international alignment could still emerge, as foreign companies that want to access the EU market must comply. They will not, however, passively accept the rules, but will seek to influence their definition to protect their own interests. This raises broader questions about the AI Act’s real capacity to project itself beyond European borders. Provisions on high-risk or prohibited systems, as well as transparency obligations, are likely to remain confined to the European perimeter, producing a limited Brussels effect based primarily on market leverage.

Criticism has also come from the academic world. Legal expert Ugo Pagallo, for example, argues that the AI Act will not produce a real Brussels effect for two main reasons: on the one hand, the existence of alternative legal models, such as those in the U.S. and Japan, and on the other, the overly broad and vague definitions contained in the text, which risk becoming obsolete quickly, as well as making the regulatory framework unclear and detrimental to innovation.

However, the reservations do not end there. A crucial issue concerns enforcement: it remains to be seen whether national authorities have the technical expertise and resources necessary for effective supervision, especially of global technology giants. Finally, there are gaps in fundamental areas, such as general-purpose AI (GPAI) and advanced autonomous systems, but also in the use of AI by governments for surveillance and national security, an area that has been almost entirely overlooked. The real challenge will therefore be the adaptability of the AI Act to an ever-changing technological and geopolitical context: a goal that could prove more complex than expected.

The EU’s global AI diplomacy

Alongside the AI Act, the EU has launched an intense diplomatic effort to spread its vision of AI and strengthen the Brussels effect. Multilateral forums such as the G7 and G20 are discussing common risks and principles, while on a bilateral level, dialogue with the United States in the Trade and Technology Council and digital agreements with Canada, Japan and India stand out. The Canadian case is particularly significant: the proposed Artificial Intelligence and Data Act (AIDA) follows the European approach. To consolidate its role as a norm entrepreneur, Brussels has also proposed a Global AI Panel modelled on the IPCC, tasked with providing scientific assessments and recommendations to governments. The initiative is still in its infancy, but it confirms the EU’s ambition to establish itself as a global benchmark in the regulation of artificial intelligence.

Next challenges: what is left to do?

Despite the progress made, there are still several areas on which the EU should focus to strengthen the effectiveness of the AI Act. The first concerns the promotion of innovation. Unlike the United States, which has long supported research through public funding and partnerships with the private sector, Europe risks losing competitiveness. It would therefore be important to create funding channels dedicated to projects that comply with the principles of the AI Act – transparency, fairness, human-centricity – and to support start-ups and small businesses. The EU could also act as a “first customer”, testing new compliant technologies, so that compliance with its rules is met not only with regulatory constraints but also with concrete incentives.

Another element concerns the need to define global standards. Deeper collaboration with international standardisation bodies, such as ISO, would help to clarify ambiguous points in the AI Act, facilitating implementation and ensuring global interoperability, as underlined by Pagallo. Having clear and shared standards in place before the legislation comes into full force would reduce compliance issues and give greater strength to the international projection of the European model.

Particular attention should also be paid to the gaps concerning general-purpose and autonomous AI. The rules introduced at the last minute during the legislative process remain underdeveloped and do not really address the most sensitive risks, such as the misuse of models or the governance of systems capable of acting autonomously. Ignoring these aspects risks leaving dangerous grey areas in the regulation.

Finally, the EU should reflect on its regulatory flexibility. The European legislative process guarantees transparency, but its rigidity makes it difficult to keep pace with the rapid evolution of AI. More dynamic models, involving experimental guidelines tested in the field and then adapted, could allow for a better balance: robust rules that are capable of evolving in the face of new technological challenges.


Bibliography