Commission Proposal for an Artificial Intelligence Act

On 21 April 2021, the European Commission published a Proposal for a Regulation laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (COM/2021/206 final). Considering that the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or society, the proposal sets out a coordinated European approach to the human and ethical implications of AI in the form of a legal framework for trustworthy AI.

From an instrument entitled AI Act, one would expect a definition of what counts as AI and therefore falls within the Act’s scope. The Proposal adopts a very broad definition of AI systems, covering software developed with “machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning”; logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and statistical approaches, Bayesian estimation, and search and optimization methods.
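The breadth of that definition is easier to see with concrete toy examples, one per Annex I category. The sketch below is purely illustrative; all names, numbers and rules are invented here and do not come from the proposal:

```python
# (a) Machine learning approach: a weight learned from example data
# (one-parameter least-squares regression through the origin).
pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, output) examples
w = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)
print(f"learned weight {w:.2f}; prediction for input 4.0: {w * 4.0:.1f}")

# (b) Logic- and knowledge-based approach: a hand-written expert-system
# rule, authored by a human rather than learned from data.
def loan_rule(income, debts):
    """IF income exceeds twice the debts THEN approve, ELSE refer."""
    return "approve" if income > 2 * debts else "refer to human"

print(loan_rule(income=50_000, debts=10_000))

# (c) Statistical approach: Bayesian estimation of a defect rate.
# With a uniform Beta(1, 1) prior and 2 defects observed in 40 parts,
# the posterior mean of the rate is (1 + 2) / (2 + 40).
defects, trials = 2, 40
print(f"estimated defect rate: {(1 + defects) / (2 + trials):.3f}")
```

Under the proposed definition, all three of these toy systems, however simple, produce software that could fall within the Act’s scope once used for one of the covered purposes.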

Title II of the proposal bans certain “particularly harmful” AI-enabled practices that are considered to contravene Union values, such as the deployment of “subliminal techniques beyond a person’s consciousness” or practices that exploit “any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm”. For certain uses of remote biometric identification systems for the purpose of law enforcement, the proposal opts not for a general ban but for specific restrictions and safeguards. It further introduces the category of “high-risk” AI systems, i.e. systems that pose significant risks to the health and safety or the fundamental rights of persons; this category includes, inter alia, AI used in transport, education, employment, credit scoring or the assessment of benefits applications. In line with the risk-based approach, such high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment.

For EnCaViBS, it is of particular interest how the AI Act Proposal addresses cybersecurity. High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts by malicious third parties, exploiting the system’s vulnerabilities, to alter their use, behaviour or performance or to compromise their security properties. Accordingly, the proposal stipulates that the technical solutions aimed at ensuring the cybersecurity of high-risk AI systems should encompass measures to prevent and control attacks that try to manipulate AI-specific assets such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or to exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To achieve a level of cybersecurity appropriate to the risks, the technical measures should therefore also take the underlying ICT infrastructure into account.
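To make the attack classes named in the proposal more tangible, the following minimal sketch illustrates data poisoning against a deliberately trivial model. Everything here is invented for illustration (synthetic 1-D data, a toy nearest-centroid “classifier”); it is not drawn from the proposal or from any real system:

```python
# Toy illustration of data poisoning: an attacker who can tamper with
# the training data injects a few mislabelled extreme points, dragging
# the learned model's decision boundary and degrading its accuracy.
import random
import statistics

random.seed(0)

def make_data(n_per_class):
    """Synthetic 1-D data: class 0 centred at 0.0, class 1 at 4.0."""
    xs = ([random.gauss(0.0, 1.0) for _ in range(n_per_class)]
          + [random.gauss(4.0, 1.0) for _ in range(n_per_class)])
    ys = [0] * n_per_class + [1] * n_per_class
    return xs, ys

def train(xs, ys):
    """'Training' computes one centroid per class; the centroid pair is
    the trained model."""
    c0 = statistics.mean(x for x, y in zip(xs, ys) if y == 0)
    c1 = statistics.mean(x for x, y in zip(xs, ys) if y == 1)
    return c0, c1

def accuracy(model, xs, ys):
    c0, c1 = model
    preds = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

train_x, train_y = make_data(200)
test_x, test_y = make_data(200)
print(f"clean model accuracy:    {accuracy(train(train_x, train_y), test_x, test_y):.2f}")

# The poisoning step: roughly 5% crafted points, deliberately
# mislabelled as class 0, pull the class-0 centroid towards them and
# shift the decision boundary into class 1's territory.
poison_x = train_x + [30.0] * 20
poison_y = train_y + [0] * 20
print(f"poisoned model accuracy: {accuracy(train(poison_x, poison_y), test_x, test_y):.2f}")
```

Measures of the kind Art. 15 calls for would include, for example, validating and monitoring the provenance and distribution of training data so that such injected outliers are detected before training.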

Compliance of high-risk AI systems with the cybersecurity requirements of Art. 15 AI Act Proposal can be demonstrated by a certificate or a statement of conformity issued under a cybersecurity scheme pursuant to the EU Cybersecurity Act (Regulation (EU) 2019/881), in so far as the cybersecurity certificate or statement of conformity, or parts thereof, cover the requirements of Art. 15.
