What are Europe's landmark AI regulations?
By Foo Yun Chee
BRUSSELS, Dec 9 (Reuters) - European Union policymakers and lawmakers clinched a deal on Friday on the world's first comprehensive set of rules regulating the use of artificial intelligence (AI) in tools such as ChatGPT and in biometric surveillance.
They will thrash out details in the coming weeks that could alter the final legislation, which is expected to go into force early next year and apply in 2026.
Until then, companies are encouraged to sign up to a voluntary AI Pact to implement key obligations of the rules.
Here are the key points that have been agreed:
HIGH-RISK SYSTEMS
So-called high-risk AI systems - those deemed to have significant potential to harm health, safety, fundamental rights, the environment, democracy, elections and the rule of law - will have to meet a set of requirements and obligations to gain access to the EU market, such as undergoing a fundamental rights impact assessment.
AI systems considered to pose limited risks would be subject to very light transparency obligations, such as disclosure labels declaring that content was AI-generated, allowing users to decide how to use it.
USE OF AI IN LAW ENFORCEMENT
The use of real-time remote biometric identification systems in public spaces by law enforcement will only be allowed to help identify victims of kidnapping, human trafficking, sexual exploitation, and to prevent a specific and present terrorist threat.
They will also be permitted in efforts to track down people suspected of terrorism offences, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation and environmental crime.
GENERAL PURPOSE AI SYSTEMS (GPAI) AND FOUNDATION MODELS
GPAI and foundation models will be subject to transparency requirements such as drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for algorithm training.
Foundation models classed as posing a systemic risk and high-impact GPAI will have to conduct model evaluations, assess and mitigate risks, conduct adversarial testing, report to the European Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.
Until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.
BANNED APPLICATIONS
The regulations bar the following:
- Biometric categorisation systems that use sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation and race.
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
- Emotion recognition in the workplace and educational institutions.
- Social scoring based on social behaviour or personal characteristics.
- AI systems that manipulate human behaviour to circumvent their free will.
- AI used to exploit the vulnerabilities of people due to their age, disability, social or economic situation.
SANCTIONS FOR VIOLATIONS
Depending on the infringement and the size of the company involved, fines will start from 7.5 million euros ($8 million) or 1.5% of global annual turnover and range up to 35 million euros or 7% of global turnover.
($1 = 0.9293 euros)
Reporting by Foo Yun Chee
Editing by Helen Popper