Bergs&More

AI Act, green light from all EU countries: what impacts on businesses?



The use of artificial intelligence (AI) systems is now within everyone's reach - just think of the rapid spread of ChatGPT - and has become an integral part of companies' business models, with countless applications ranging from automated machinery to customer service chatbots to algorithm-driven decision-making. These technologies offer enormous potential, but they also raise serious concerns in terms of security, ethics, and fundamental rights. This is the context of the European Union's strategy on artificial intelligence, which aims to contain the potential risks and harms that "intelligent" computer systems could cause. After years of negotiations, an agreement was finally reached in early December of last year, and on February 2nd the 27 EU Member States approved the latest (still provisional) text of the Artificial Intelligence Act (AI Act). Final approval of what will be the world's first legislation on artificial intelligence is expected this spring.

 

Key points and obligations

The Regulation aims to address AI from the generative phase onwards, regulating not only the final products but the technology itself. To this end, it adopts a definition of AI that is as neutral as possible, so as to give the rules a broad scope of application. The cornerstone of the Regulation is the "risk-based approach," which classifies AI systems according to their level of risk. This classification determines the level of regulation and compliance required, obliging companies involved in developing and deploying these technologies to meet different standards. The AI Act distinguishes three categories of AI systems:

  1. AI systems involving an unacceptable risk: these include behavior manipulation systems, mass surveillance systems, social scoring, and real-time biometric identification systems, all of which are prohibited by the Regulation.

  2. High-risk AI systems: these pose a high risk to the health, safety, and fundamental rights of individuals and are used, for example, in sectors such as education, recruitment, public services, justice, and public safety. In particular, an AI system is considered high-risk when it meets both of the following conditions:

  • The AI system is used as a safety component of a product, or is itself the product;

  • The product must undergo a third-party conformity assessment before being placed on the market.

The placing on the market of such systems is also subject to compliance with specific obligations and a rigorous prior conformity assessment, under which providers must:

  1. Identify, assess, and mitigate risks throughout the product's life cycle;

  2. Carry out tests to identify the most appropriate risk management measures;

  3. Before placing on the market or putting into service, draw up technical documentation demonstrating that the AI system complies with the Regulation, and retain it and keep it up to date;

  4. Inform national authorities and, upon request, cooperate with them to demonstrate the system's compliance.

Finally, these systems shall be designed and developed in order to:

  1. Ensure transparency to users about their operation and provide information on how to use such systems;

  2. Allow effective supervision by individuals;

  3. Ensure an adequate level of accuracy, robustness, and cybersecurity.

These ex ante obligations are complemented by ex post obligations, which include:

  1. Registration of the AI system in an EU database managed by the Commission, in order to increase public transparency and strengthen supervision by the competent authorities;

  2. Ongoing monitoring of the system's compliance after it has been placed on the market.

The third category comprises low-risk AI systems, which are subject only to minimum transparency requirements regarding their operation, development, and technical specifications.

 

Recipients of the AI Act

The AI Act applies to the various actors operating along the AI value chain, each of whom is the recipient of specific obligations:

  • Providers, i.e., the entities that develop an AI system, or have one developed, in order to place it on the market; they bear the most significant obligations under the Regulation and are responsible for the system's compliance;

  • Authorized representatives, importers, and distributors, who place an AI system on the market or make it available;

  • Users of such systems, for example companies that use AI systems to manage their activities, who are required to comply with the instructions for use and to monitor the system's operation.

 

Penalties

In order to ensure effective implementation of the Regulation, three penalty ranges are provided for non-compliant companies, with the higher of the two amounts applying in each case: (i) up to EUR 30 million or up to 6% of total worldwide annual turnover for engaging in AI practices prohibited by the Regulation; (ii) up to EUR 20 million or up to 4% of total worldwide annual turnover for non-compliance with any other requirement or obligation under the Regulation; (iii) up to EUR 10 million or up to 2% of total worldwide annual turnover for providing incorrect, incomplete, or misleading information.
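To see how these ceilings operate in practice, here is a minimal sketch of the cap logic described above, written in Python; the function name and the turnover figure are illustrative assumptions, and the "higher of the two" rule reflects the provisional text:

    def fine_ceiling(fixed_cap_eur: int, turnover_share: float, worldwide_turnover_eur: int) -> float:
        """Maximum fine for a company under one AI Act penalty tier:
        the higher of a fixed cap and a share of total worldwide
        annual turnover (per the provisional text)."""
        return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

    # Hypothetical example: the prohibited-practices tier (EUR 30M / 6%)
    # applied to a company with EUR 1 billion in worldwide annual turnover.
    ceiling = fine_ceiling(30_000_000, 0.06, 1_000_000_000)
    print(f"Ceiling: EUR {ceiling:,.0f}")  # EUR 60,000,000 -> the 6% limb prevails

Under these assumptions the turnover-based limb (EUR 60 million) exceeds the fixed cap and therefore sets the ceiling; for smaller companies, the fixed amount would prevail instead.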

 

Conclusions

The AI Act represents a historic step towards the regulation of artificial intelligence, although, as already noted, final approval is still pending. In any case, once it enters into force, companies using or developing AI systems will have a two-year grace period to comply with its provisions. The AI Act is therefore also an opportunity for growth, allowing companies to gain a competitive advantage and strengthen their reputation by building customer confidence in AI-based solutions.

 

Authors: Consuelo Leonardi – Beatrice Olivo
Contact: Avv. Eduardo Guarente, e.guarente@bergsmore.com

