The Artificial Intelligence Act: why does it matter to me?
The Artificial Intelligence Act, or the AI Act for short, was adopted on June 13, 2024, and entered into force on August 1, 2024.
Objective and position of the AI Act
The AI Act aims to promote the development of AI, not least by startups, while curbing its dangers to the fundamental rights and freedoms of EU citizens. Braking and accelerating at the same time: we'll see how that turns out! The AI Act is an EU Regulation that complements other EU legislation such as the GDPR (known in the Netherlands as the AVG), the Digital Services Act, and so on. The AI Act addresses, among others, the providers of AI systems, the party responsible for their use — in other words, the deployer of the AI system — and the individuals affected by it. The question is what the AI Act adds on top of, for example, the GDPR.
What is an AI system?
An AI system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment. For explicit or implicit objectives, the system infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The AI Act distinguishes several risk categories, namely:
Prohibited practices: these AI practices carry an unacceptable risk and are banned outright. Examples include:
- malicious manipulation, and the exploitation of vulnerable individuals or groups;
- social scoring;
- profiling to assess the risk that an individual or group will commit criminal offences;
- emotion recognition in the workplace or in education;
- creating facial recognition databases through untargeted scraping of camera footage or the internet;
- biometric categorization of individuals; and
- real-time remote biometric identification, unless necessary for law enforcement in specific cases with judicial authorization.
High-risk: Chapter III of the AI Act sets out rules for determining whether or not an AI system qualifies as high-risk. So when is an AI system high-risk?
A particularly relevant factor in classifying an AI system as high-risk is the extent of the adverse impact it has on the fundamental rights protected by the Charter. These rights include: the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and association, the right to non-discrimination, the right to education, consumer protection, workers' rights, the rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective remedy and to a fair trial, the rights of the defence and the presumption of innocence, and the right to good administration. The specific rights of children deserve particular emphasis. The fundamental right to a high level of environmental protection, which is enshrined in the Charter and implemented in Union policies, should also be taken into account when assessing the seriousness of the harm an AI system may cause, including harm to the health and safety of persons. A complete list of the areas concerned can be found in Annex III of the AI Act; the classification is a precise exercise.
Risk of deception: providers of AI systems that generate content must mark that content as artificially generated or manipulated, and deepfakes must be disclosed as such. Likewise, when deploying a chatbot, for example, it must be made clear to users that they are interacting with AI.
Is only big tech subject to the AI Act?
On the face of it, the AI Act may seem primarily of interest to parties who place an AI system on the market. But Article 2 of the AI Act also lists the deployer as a standard addressee. A "deployer" is a natural or legal person, public authority, agency or other body that uses an AI system under its own authority, unless the AI system is used in the course of a personal, non-professional activity.
This also brings businesses that merely use AI systems into the picture. Not all obligations that apply to providers of AI systems, however, also apply to deployers.
What about ChatGPT?
ChatGPT, for example, a widely used AI system, is generally classified as limited risk and is primarily subject to transparency obligations, for instance regarding data input/output and potential unreliability. But it is certainly not a free-for-all, and it is advisable to use the obligations for deployers of high-risk systems as a guide.
Deployers of high-risk AI systems must, among other things, comply with the following obligations:
- Ensure adequate AI literacy among their personnel and other persons operating and using AI systems on their behalf;
- Take appropriate technical and organizational measures to ensure that AI systems are used in accordance with their instructions for use;
- Assign human oversight to natural persons who have the necessary competence, training and authority, and provide them with the necessary support;
- To the extent the deployer exercises control over the input data, ensure that the input data are relevant and sufficiently representative in view of the intended purpose of the AI system;
- Monitor the operation of the AI system on the basis of its instructions for use and, where the deployer has reason to believe the AI system presents a risk or identifies a serious incident, notify the provider and the market surveillance authority;
- Retain automatically generated logs for an appropriate period, to the extent such logs are under the deployer's control;
- Inform employee representatives and affected employees, before a high-risk AI system is put into use in the workplace, that they will be subject to its use;
- Cooperate with the relevant competent authorities.
Conclusion
The upshot: when a high-risk AI system is put into operation, the entire organization must be prepared for it.
Do you have questions about the above developments, IT law, or other legal issues? Our specialized lawyers will be happy to assist you. You can contact one of our lawyers by e-mail, telephone or via the contact form for a no-obligation initial consultation. We are happy to think along with you.