EU reaches provisional agreement on the substantive content of AI legislation


The European Parliament and the Council of the EU have reached a provisional agreement on the content of the AI Act. The act sets out a series of rules that AI systems operating in Europe must comply with; certain applications are banned outright.

The full agreement has not yet been published, but a press release has been released containing some of the rules and prohibited uses. The latter category includes artificial intelligence used for social scoring and AI ‘used to exploit people’s vulnerabilities, due to their age, disability, social or economic situation’. Emotion-recognition applications in the workplace and in educational institutions are also prohibited.

Facial recognition systems ‘that use sensitive characteristics’, such as political or religious beliefs, origin and sexual orientation, will also be banned, as will systems that indiscriminately scrape faces from the internet or from security-camera footage to build facial recognition databases. The EU does mention some exceptions for law enforcement, which will be allowed to use biometric identification systems under ‘strict conditions’.

Furthermore, systems considered ‘high risk’ must meet certain conditions. These are systems with significant potential for harm to ‘health, safety, fundamental rights, the environment, democracy and the rule of law’, according to the EU. This includes, for example, systems that can be used to influence voters and the outcome of elections. Such applications must, among other things, undergo a mandatory impact assessment; the other obligations are not spelled out in the press release. It does state that citizens will have the right to file complaints about such systems and to ‘receive explanations about decisions based on high-risk AI systems that affect their rights’.

Finally, rules have also been drawn up for general-purpose AI models, such as GPT-4, and for systems built on those models. These must meet transparency requirements under the AI Act. For example, the companies behind these models must provide detailed summaries of the content used for training, ensure that copyright law is not violated, share ‘technical documentation’ and provide safeguards against the generation of illegal content.

If, according to the EU, a general-purpose AI model poses a ‘systemic risk’, it must meet stricter requirements. Model evaluations must then be carried out, and the identified systemic risks must be assessed and mitigated. In addition, the model must undergo adversarial testing, cybersecurity must be guaranteed and the makers must disclose the model’s energy use. In the event of ‘serious incidents’, the European Commission must be informed. It is not stated which criteria a system must meet to be regarded as posing a ‘systemic risk’.

AI companies that break the rules risk fines ranging from 35 million euros or 7 percent of global revenue down to 7.5 million euros or 1.5 percent of revenue, depending on the infringement and the size of the company. The AI Act still needs to be formally approved by the Council and Parliament, which is expected to happen before the end of the year. The law will not enter into force before 2025. It will likely be the first comprehensive AI law in the world.
