The EU Artificial Intelligence Act came into force this week without much fuss. It creates obligations for all organisations that develop or use artificial intelligence, including predictive analytics, virtual assistants, chatbots, HR tools, fraud detection tools, medical diagnostic tools, financial credit scoring, online retail recommendation engines, and targeted advertising.
The AI Act governs the use and development of artificial intelligence. It takes a risk-based approach, categorising AI systems according to the risk they pose and imposing obligations commensurate with the threat to people's fundamental rights.
Certain practices are deemed to pose an unacceptable risk and are therefore prohibited. These include cognitive behavioural manipulation, emotion recognition in the workplace and in education, predictive policing based on profiling, social scoring, biometric categorisation using sensitive characteristics, and real-time remote facial recognition in publicly accessible spaces. Limited exceptions apply in relation to law enforcement.
High-risk activities include AI used in products such as toys, transport and medical devices (where the end product is subject to EU safety legislation), as well as AI systems relating to critical infrastructure, education and training, employment, access to essential services, law enforcement, migration, and the administration of justice. High-risk systems must undergo conformity assessment before release and periodically thereafter. Detailed records on the development of data sets and on system functionality must be maintained. The system must allow for human oversight and intervention, and it must be registered on an EU database. Individuals will be entitled to make complaints about high-risk systems.
Limited-risk activities include the use of chatbots and generative AI tools such as ChatGPT. Providers and deployers must comply with transparency obligations and copyright law. Users must disclose that content such as images, audio or video has been generated or modified by artificial intelligence, and chatbots must disclose that they are not human, so that audiences are not misled. Providers must design their systems to prevent the generation of illegal content.
The most advanced general-purpose AI models must undergo evaluation before release, and any serious incidents involving breaches of fundamental rights must be reported.
The AI Act will be enforced by the European AI Office together with national supervisory authorities. Fines for non-compliance can reach €35 million or 7% of a company's global annual turnover, whichever is higher.
Many businesses already use AI to some degree in their work and will be classified as deployers under the Act. For this reason, they should familiarise themselves with the new rules, conduct an assessment of their own use of artificial intelligence over the coming months, and mitigate any compliance risks that arise.
The above is provided for information purposes and is not intended as legal advice. Fitzsimons Redmond LLP would be happy to discuss the needs of your business in relation to the EU AI Act. Please contact us on 01-676 3257.
By Lisa Quinn O’Flaherty
Partner at Fitzsimons Redmond LLP