The European Commission has this week unveiled a draft Regulation for the harmonisation of Artificial Intelligence (AI) governance across Member States. The draft Regulation follows on from the Commission’s 2020 White Paper consultation on the same subject.

Following in the footsteps of the General Data Protection Regulation (GDPR), which set the gold standard for data protection legislation, the European Union (EU) now appears to be attempting to cement its position as a global leader in AI governance.

In similar fashion to the GDPR, the Artificial Intelligence Act (AIA) has extraterritorial scope. AI systems that are placed on the market in the EU, used by persons located in the EU, or whose output is used in the EU will fall within the scope of the AIA.

Interestingly, the AIA will not apply to AI systems used exclusively for military purposes, nor to public authorities and international organisations outside the EU.

Article 56 of the Regulation would, similarly to the GDPR, establish a dedicated European Artificial Intelligence Board to oversee the regime and assist with compliance. Each Member State would be required to designate or create a national competent authority to ensure compliance at the national level. A public register of high-risk AI systems would also be established.

The Regulation would also establish a number of AI regulatory sandboxes to promote the innovation and development of compliant AI. Competent national authorities would be required to work with a range of companies, including small-scale providers and start-ups, to help achieve the EU’s goal of being a leader in AI.

The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services across Member States, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Recital 1

The proposed Regulation has five specific aims:

  1. Harmonised rules on the use of AI;
  2. Prohibition of the use of certain AI;
  3. Specific requirements for high-risk AI;
  4. Harmonised transparency rules for the use of AI; and
  5. Rules on market monitoring and surveillance.

Prohibition of the use of certain AI

Article 5 of the AIA outlines a list of prohibited AI use cases:

  1. An AI system that deploys subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  2. An AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  3. The use of AI systems by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was originally generated, or treatment that is unjustified or disproportionate to the social behaviour; and
  4. The use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless used for one of the following objectives:

a. the targeted search for missing children;

b. the prevention of an imminent threat to life; or

c. the capture of someone wanted under a European Arrest Warrant.

These restrictions are arguably in line with widely accepted principles of ethical AI use. However, the Article 5(1)(d) law enforcement provisions are likely to cause significant headaches for police forces that have been rolling out AI systems such as live facial recognition.

Furthermore, the provisions on social scoring will no doubt put the Commission on a collision course with China, where the beginnings of a social credit system are already in place.

High-risk AI systems

Article 6(1) of the AIA introduces the classification of “high-risk” AI. It sets out two conditions, both of which must be fulfilled for a system to be classified as high-risk (illustrated in the short sketch after the two conditions below):

(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II (regulated products such as toys, machinery, etc.); AND

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
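
Expressed as boolean logic, the Article 6(1) test is a simple conjunction. The following minimal Python sketch (with hypothetical parameter names; the Act itself prescribes no such code) illustrates that both limbs must hold:

```python
# Minimal sketch of the Article 6(1) test: a system is "high-risk" only if
# BOTH limbs hold. The parameter names are hypothetical, not drawn from the Act.
def is_high_risk(covered_by_annex_ii: bool,
                 requires_third_party_conformity_assessment: bool) -> bool:
    # Article 6(1)(a) AND Article 6(1)(b) must both be satisfied.
    return covered_by_annex_ii and requires_third_party_conformity_assessment

# An Annex II product that needs no third-party conformity assessment
# would fall outside the Article 6(1) classification:
print(is_high_risk(True, False))  # False
```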

In determining whether further systems should fall within this classification, Article 7 empowers the Commission to take a broad, holistic approach, considering factors such as the intended purpose of the system, the extent of its use, and the severity and probability of potential harm.

Under Article 9, high-risk systems are required to have a risk management system covering the entire lifecycle of the AI’s use. Furthermore, under Article 10, AI systems that use data to train their models, such as machine learning systems, are required to undergo data governance and management practices similar to the data processing and collection assessments that exist under the GDPR regime. Article 10(2)(f) also introduces a requirement to examine training, validation and testing data in view of possible biases.
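
The Act does not prescribe how such a bias examination should be carried out. As a purely illustrative sketch (the function and field names here are hypothetical), a provider might start by comparing outcome rates in the training data across a protected attribute:

```python
# Illustrative only: the AIA does not mandate any particular bias test or tooling.
from collections import defaultdict

def outcome_rates_by_group(records, group_key, label_key):
    """Return the positive-outcome rate for each value of a protected attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += 1 if row[label_key] else 0
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold x the best group's rate
    (the common 'four-fifths' heuristic; not a test mandated by the AIA)."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Hypothetical training records for a credit-scoring model.
training_data = [
    {"age_band": "18-30", "approved": True},
    {"age_band": "18-30", "approved": True},
    {"age_band": "60+", "approved": False},
    {"age_band": "60+", "approved": True},
]

rates = outcome_rates_by_group(training_data, "age_band", "approved")
print(rates)                  # {'18-30': 1.0, '60+': 0.5}
print(flag_disparity(rates))  # ['60+']
```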

Article 11 requires that high-risk systems include detailed technical documentation, and Article 12 requires that events occurring during the system’s operation, including its decisions, are automatically logged. Article 13 requires that a high-risk system’s “operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately”. Furthermore, Article 14 requires that high-risk systems be designed for effective human oversight. Finally, Article 15 unsurprisingly includes a requirement for appropriate cybersecurity, continued accuracy and resilience.
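
The Act is technology-neutral on how such logging should be implemented. A minimal sketch of Article 12-style record-keeping, assuming a simple append-only log (the format and field names are our own invention, not mandated by the Act), might look like this:

```python
# Illustrative sketch of automatic record-keeping for a high-risk system.
import datetime
import json
import uuid

def log_decision(model_version, inputs, output, log_path="decision_log.jsonl"):
    """Append one structured, timestamped record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,   # in practice, minimised in line with data protection rules
        "output": output,
    }
    with open(log_path, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

log_decision("credit-scorer-1.4", {"income_band": "B"}, {"score": 0.62})
```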

Providers of these systems are required to complete an Article 43 conformity assessment to ensure that their high-risk systems comply with the requirements of the AIA. Additionally, a range of parties, from importers and distributors to users, are required to maintain similar levels of compliance with the AIA regime. Article 30 requires each Member State to designate or establish a notifying authority responsible for overseeing the conformity assessment and notification procedures.

Providers are further required to monitor the use of their systems once on the market and to evaluate their continued compliance based on the data collected (Article 61).

Transparency obligations

Article 52(1) would introduce a requirement for any AI system that interacts with natural persons to be designed in a manner that informs individuals that they are interacting with an AI system, unless this is obvious from the circumstances.
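
In practice, this could be as simple as a disclosure shown before the first exchange. A minimal, hypothetical sketch (the Act mandates no particular wording or mechanism):

```python
# Purely illustrative: Article 52(1) requires that people be informed they
# are interacting with an AI system; the wording and structure are our own.
DISCLOSURE = "You are interacting with an automated AI system, not a human."

def start_chat_session(generate_reply):
    """Run a simple chat loop, disclosing the AI's nature up front."""
    print(DISCLOSURE)  # disclose before any interaction takes place
    while (message := input("> ")).lower() not in {"quit", "exit"}:
        print(generate_reply(message))

# Example stub: echo the user's message back.
# start_chat_session(lambda msg: f"You said: {msg}")
```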

Article 52(3) tackles the problem of deepfakes by requiring that image, audio or video content generated or manipulated by an AI system be disclosed as artificially generated or manipulated.

Regulatory oversight

Under Article 63, national supervisory authorities will be required to monitor and report on the use of AI systems within their Member State. Under Article 64, these authorities will have broad powers to compel providers to provide data, documentation and even the source code of AI systems. National authorities and the European Artificial Intelligence Board will have the power to suspend the use of specific AI systems that are deemed dangerous, present a risk at the national level, or are non-compliant. Article 71 outlines a number of fines that can be levied for specific breaches of the AIA, with fines as high as €30 million or 6% of total worldwide annual turnover (whichever is higher) available for the most serious infractions.
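
To see how the “whichever is higher” cap operates in practice, consider a quick worked example (the turnover figure is invented):

```python
# Worked example of the Article 71 maximum penalty: the higher of a flat
# EUR 30m cap or 6% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# For a provider with EUR 1bn turnover, 6% (EUR 60m) exceeds the flat cap:
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 60,000,000
```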

Your move, UK

The United Kingdom will undoubtedly seek to accelerate the establishment of its own AI governance framework. However, there is a risk that UK AI governance will be shaped by the need, as with data protection legislation, to remain adequately compatible with the AIA regime. It is highly likely that the Commission’s first-mover advantage will once again allow it to define the baseline for technology regulation.