The European Union has taken a major step in regulating artificial intelligence with its new AI Act. This landmark law, which entered into force on August 1, 2024, aims to create a safe and ethical environment for AI development and use across all 27 EU member states.
The AI Act is a comprehensive set of rules that covers the entire lifecycle of AI systems, from creation to use. It’s designed to promote innovation while protecting people’s rights and safety. The law takes a risk-based approach, meaning that the rules get stricter as the potential risks of an AI system increase.
Key points
One of the Act’s key features is its definition of AI systems: machine-based systems that operate with varying levels of autonomy, may adapt after deployment, and infer from their inputs how to generate outputs such as predictions, content, recommendations, or decisions. This broad definition helps ensure the law can apply to a wide range of AI technologies.
The Act bans outright certain AI practices considered harmful or manipulative, such as using AI to exploit people’s vulnerabilities or for social scoring by public authorities. Separately, it imposes transparency obligations, for example requiring that “deep fakes” be clearly disclosed, and it sets strict rules for “high-risk” AI systems, which include those used in critical areas like education, employment, and law enforcement.
A notable aspect of the Act is its focus on “general-purpose AI models” (GPAI). These are powerful AI systems that can perform a wide range of tasks. The law requires providers of these models to assess and mitigate potential risks, with stricter obligations for models deemed to pose “systemic risk,” a designation currently tied to training compute above 10^25 floating-point operations.
To ensure compliance, the Act introduces hefty penalties for violations. Companies breaking the rules could face fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

The implementation of the AI Act will be gradual. Although it entered into force in August 2024, its obligations apply in stages: prohibitions on banned practices from February 2025, rules for general-purpose AI models from August 2025, and most remaining provisions from August 2026. This gives companies and organizations time to adapt to the new rules.
The EU’s AI Act is set to have a global impact. As the first comprehensive AI regulation of its kind, it’s likely to influence how other countries approach AI governance. Many tech companies may choose to align their global practices with EU standards to ensure compliance in this important market.
However, the Act also faces challenges. Critics worry it might stifle innovation or be too complex to implement effectively. There are also concerns about how it will interact with rapidly evolving AI technology.
Despite these challenges, the EU AI Act represents a significant step towards creating a framework for responsible AI development and use. It aims to strike a balance between fostering innovation and protecting societal values and individual rights.
As AI continues to play an increasingly important role in our lives, the EU’s approach to regulation could set a precedent for how we manage this powerful technology on a global scale.