The use of Artificial Intelligence is growing rapidly, as new software revolutionizes the way people interact with technology and do their work.
When AI-based tools work productively, they open up a whole new world of getting work done, dramatically reducing the time spent on repetitive tasks and shifting the focus of business resources to more creative and higher-value-added activities.
Alongside these benign and productive uses of AI, however, there are uses that are distorted, dangerous and, in some cases, contrary to human rights. It is for this reason that the European Union has introduced the AI Act, with which it aims to become the first political institution to establish a clear framework of rules for the sector, hoping to create a trickle-down effect that influences regulatory processes in the United States and China and cascades to the rest of the world.
So let's take a look at what the AI Act produced by the European Parliament consists of, analyzing the main points contained in the regulation and trying to understand the implications for European companies.
The European Union introduces the AI Act
The European Union has taken a significant step in regulating artificial intelligence with the June 14, 2023, approval of the AI Act by the European Parliament [1].
The AI Act is a regulation that aims to provide norms and standards for Artificial Intelligence applications that European citizens come into contact with.
Parliament's priority is to ensure that AI systems used in the European Union are secure, transparent, traceable, non-discriminatory, and environmentally friendly.
This regulation was a response to growing concerns regarding the impacts of AI on security, people's rights and European values. Although the AI Act is still subject to future amendments and negotiations with the European Commission and the European Council, the European Parliament's vote marks an important step toward the implementation of this new regulation.
This is not the first time that European institutions have grappled with the idea of AI regulation.
In April 2021, the European Commission had proposed the first EU regulatory framework for AI: the proposal called for AI systems used in different applications to be analyzed and classified according to the risk they pose to users, with the different risk levels entailing more or less regulation.
So let's go over what new features were introduced in the text voted by the European Parliament a few days ago.
The key points of the AI Act
In addition to a total ban on the use of real-time biometric recognition technologies in public places, the AI Act introduces a classification of risks associated with AI applications, identifying three levels of risk.
First, systems that are intrusive or discriminatory, that endanger people, or that violate European values are considered an unacceptable risk and will not be usable within the borders of the European Union.
These are systems used for social scoring, i.e., to rank people based on their social behavior or personal characteristics; systems that can influence voters and the outcome of elections; predictive policing techniques based on profiling, location, or past criminal behavior; emotion recognition systems used by law enforcement in border management; voice-activated toys that could encourage dangerous behavior in children; and systems for the untargeted extraction of biometric data from the Internet or CCTV footage to create facial recognition databases.
Then there are the high-risk systems, which can significantly affect people's safety and fundamental rights.
These include systems for biometric identification and categorization of individuals; systems for the management and operation of critical infrastructure; platforms for education and vocational training; systems for access to and enjoyment of essential private services and public services and benefits; systems for the management of migration, asylum and border control; and other systems.
These systems are not banned, but must be evaluated and meet specific requirements including transparency of use, and are also subject to controls during their life cycle.
Finally, minimal-risk systems are not subject to specific legal obligations; however, they should meet minimum transparency requirements that enable users to make informed decisions: after interacting with an application, the user should be able to decide whether to continue using it. Providers of generative AI systems such as ChatGPT or Midjourney will also have to publish detailed summaries of the copyrighted data used for training, make explicit which content is generated by AI software, and prevent the generation of illegal content.
In addition, users will need to be informed when interacting with AI: this includes generative AIs, systems that generate or manipulate image, audio or video content, and even technologies that allow deepfakes to be produced.
Impact of the AI Act and prospects for companies
The AI Act represents a decisive step in how the EU wishes to adapt to and regulate the use of AI. The regulation aims to promote the responsible use of this technology while ensuring that fundamental rights are respected and individuals are protected. Penalties for violations of the AI Act are severe, with fines of up to €30 million or 6% of the company's annual turnover.
These penalties aim to ensure compliance and deter irresponsible use of AI.
In short, the text has now been definitively approved by a first EU institution and can be considered fairly consolidated. Areas of change will be few, concerning almost exclusively facial recognition – where there is intense debate about possible exceptions to the ban – for example, facial recognition on EU streets and at borders for national-security reasons or in cases of missing children.
The main impact on enterprises stems from the mechanism provided by the risk-based approach: enterprises will have to verify that the AI component of the products they want to market does not fall under the list of high-risk applications and, if it does, they will have to carry out a conformity assessment.
This assessment can be carried out either in-house or by a certified third party. Positively evaluated products will go to market with the CE stamp of conformity (physical or virtual). If the company makes radical changes to one of its high-risk AI applications, it will have to conduct a new conformity assessment.
In addition, companies that use AI in their products will have to structure an after-sales monitoring system to detect unforeseen critical issues.
Finally, companies will have to store all documentation produced for conformity assessments and make it easily accessible to national authorities.
According to early expert opinions, the two high-risk AI applications that companies are most likely to adopt are those for calculating credit and banking risk and those for analyzing CVs for hiring purposes.
And yet, this is not the final act of this long legislative process: the recently voted text will serve as the starting point for a new negotiation – called a trilogue – with the other two European institutions, the Commission and the Council. The goal of the trilogue is to reach an agreement by the end of 2023 on a legislative proposal that is acceptable to both the Parliament and the Council, the so-called co-legislators, with the European Commission acting as mediator to facilitate an agreement between them.
This tentative agreement must then be adopted by each institution's formal procedures.
It is safe to say that the race for compliance by companies designing and using AI has already started: "There is a two-year grace period from entry into force, but companies need all of it to build processes for compliance with the regulation," Massimo Pellegrino, partner at Intellera, a specialized consulting firm, explained to Il Sole 24 Ore [2].
Generative AI alone (ChatGPT and similar systems) could add up to $4.4 trillion in value to the global economy annually and automate activities that today absorb 60–70% of employees' time, according to a McKinsey report [3]. A value that companies are preparing to seize.
[1] See the article by the European Parliament titled EU AI Act: first regulation on artificial intelligence.
[2] See the article by Il Sole 24 Ore titled Il Parlamento europeo approva l'AI Act, cosa cambierà per le nostre aziende?
[3] See the report by McKinsey titled The economic potential of generative AI: The next productivity frontier.