World's first comprehensive law on AI

The Artificial Intelligence Act, or AI Act, officially goes into effect today. The requirements the law imposes will be applied step by step over the coming years. Tech companies, government institutions and other organizations have a lot of work ahead of them.
The AI Act is the world’s first comprehensive law on artificial intelligence. The goal is to restrict the use of AI technology in Europe to safeguard fundamental civil rights such as privacy, and to put a stop to potential dangers like discrimination and exclusion.
To achieve this, the law divides artificial intelligence applications into risk levels. AI technology designed to interact with people or to generate online content, such as chatbots and deepfakes, is considered low risk and is subject to transparency obligations: people must be informed when such technology is used.
High-risk AI applications, such as smart cameras equipped with facial recognition or software designed to assess job applicants, are allowed but subject to strict conditions regarding risk management, technical documentation, transparency and human oversight. In addition, a quality mark is mandatory.
Lastly, the AI Act lists several categories of banned AI technology. Systems that assess and score people based on their mood and behavior, software that scrapes facial images, predictive policing based on physical appearance and applications designed to manipulate human behavior are prohibited in the EU. The same goes for smart cameras installed in public spaces and used for real-time biometric identification.
There are exceptions to these bans, however. Real-time biometric identification may be used, for instance, when there are indications that a terrorist attack is about to take place. The technology may also be used to locate victims of serious crimes, including human trafficking, kidnapping and the distribution of child sexual abuse material (CSAM).
The AI Act will be implemented in stages. From February 2025, the prohibited applications listed above will be banned in Europe.
From that date, the AI Act also requires companies and organizations that use AI systems to have sufficient in-house knowledge of artificial intelligence. A human resources employee, for example, must understand that an AI system may contain biases or overlook essential information, which could lead to an applicant being wrongly assessed.
By August 2025, all 27 EU member states must have designated a regulator to supervise AI systems. A year later, supervision of high-risk AI models will commence. By 2030, the AI Act will be fully in effect.
Companies that violate the AI Act can face hefty fines of up to 35 million euros or 7 percent of global turnover, depending on the violation and the size of the company.
According to the Autoriteit Persoonsgegevens, the Dutch data protection authority, tech companies, government agencies, regulators and other organizations will have their work cut out for them. The Dutch DPA has published a roadmap explaining how to properly prepare for the AI Act.