The European Union is leading the way in regulating artificial intelligence with what has been dubbed the “AI Act”.
Once in force, it will be the world’s first comprehensive set of AI rules to have been approved by lawmakers.
The European Union, comprising 27 nations, has been developing regulations on AI for the past five years.
These laws aim to regulate high-impact, general-purpose AI models, along with high-risk systems.
On Wednesday 13 March, lawmakers approved the AI Act: 523 EU lawmakers voted in favour, 46 voted against, and 49 abstained.
The AI Act is expected to provide a global signpost for other governments that are aiming to regulate AI technology in their own countries.
A full outline of the AI Act is available for viewing, but of note is how the act evaluates AI products and services via a risk-based approach.
Applications posing unacceptable risk, such as biometric identification and categorization of people or social scoring, will be banned, although biometric identification may still be permitted for law enforcement purposes.
High-risk AI systems will be split into two categories. The first covers AI that falls under the EU’s product safety legislation, such as AI used in toys, cars, and medical devices.
The second covers AI programs that will have to be registered in a database: systems that perform tasks in areas such as infrastructure management, education, employment, and law enforcement.
High-risk AI programs will require assessment and constant monitoring throughout their life cycle, and people will have the right to file complaints with national authorities.
Meanwhile, generative AI such as OpenAI’s ChatGPT will not be classified as high risk but will have to comply with EU copyright law and transparency requirements.
This will require disclosing and labelling content as AI-generated, preventing the model from generating illegal content, and publishing summaries of the data used for training to ensure copyright compliance.
Advanced AI models such as GPT-4, also by OpenAI, will have to undergo evaluation.
Fines for breaking the regulations could cost companies up to 35 million euros, almost $58 million AUD, or 7% of global turnover, depending on the type of violation.
The AI Act is expected to come into force in May or June and be rolled out in stages, with the complete set of regulations in effect by mid-2026.