Alessio Azzutti

I was delighted and fortunate to engage with Alessio Azzutti, Lecturer in Law and Technology at the University of Glasgow and a panelist at the event "Artificial Intelligence: A Global Landscape", during which he provided an overview of the European Union's AI Act.

Below is a summary of Alessio’s contribution:

In the European Union, public regulation, as opposed to industry self-regulation, is generally perceived favorably by society. The initial strides toward AI regulation were taken by the European Commission and later joined by the EU's other primary legislative institutions, the Council of the European Union and the European Parliament, leading to the eventual development of the EU AI Act.

The beginning of the political journey toward the EU AI Act can be traced back to at least 2019, with the publication of the High-Level Expert Group on Artificial Intelligence's Ethics Guidelines for Trustworthy AI. This report first outlined the key requirements of trustworthy AI governance: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

A further significant milestone in this trajectory was the Commission's 2020 White Paper on Artificial Intelligence, which delineated policy options to foster excellence and trust in AI adoption and emphasized key legal issues such as liability for harm caused by AI systems.

In April 2021, the Commission released the first draft of the EU AI Act, thus initiating the trilogue negotiations among the three central EU institutions (the Commission, the Council of the European Union, and the European Parliament), ultimately culminating in the political agreement of December 2023.

While the final version of the EU AI Act is still being drafted, the definition of AI is expected to align with that of the OECD, which defines an AI system as a "machine-based system that, for explicit or implicit (human-defined) objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". The AI Act's scope focuses on the potential harms AI systems pose to people's safety and fundamental rights, as well as to democracy, the rule of law, and the environment. Notably, the AI Act extends its reach beyond the EU and targets both AI providers and users.

Certain exceptions, however, exist, including national security, military defense, research and innovation, and specific areas of open-source AI. The EU AI Act classifies AI systems into four main categories, with distinct and proportionate regulatory requirements based on the level of risk:

Unacceptable Risk AI Systems: Prohibited outright by the legislation, encompassing social credit scoring, manipulative techniques, emotion recognition in education and the workplace, biometric categorization, and predictive policing and mass surveillance, albeit with some important exceptions.

High-Risk AI Systems: Subject to stringent requirements, including compliance obligations, registration in an EU public database, data governance, transparency, user instructions, technical documentation, risk and quality management, human oversight (including human-in-the-loop controls), explainability, auditability, technical robustness, and cybersecurity.

Limited Risk AI Systems: Subject to transparency obligations, including a duty to inform users that they are interacting with AI-based agents, as in applications such as chatbots and emotion recognition systems.

Minimal Risk AI Systems: Carry no mandatory obligations, though providers may adhere to voluntary codes of conduct to promote best practices.

Moreover, General-Purpose AI systems, particularly those posing systemic risks, are defined in part through quantitative thresholds (such as training compute) and are subject to specific legal and regulatory treatment.

The applicability of the AI Act to the financial services industry

There are several ambiguities regarding the future application of the AI Act to the various use cases in the financial services industry. The AI Act defines only 'credit scoring' as a 'high-risk' AI application in finance. While consumer-facing financial services will likely fall within the AI Act's scope of application, wholesale financial services present more challenges. This seems particularly true of certain high-stakes AI applications, such as financial trading, which entail potentially significant risks to market integrity and financial stability.

A good case in point is financial market manipulation through AI. Given, for instance, the strong links between social media and capital markets, recent events like the GameStop saga underscore the impact of online dynamics on finance. While the AI Act addresses novel issues such as deepfakes, more is needed: a comprehensive legislative approach, including the digital services package and platform content moderation rules, is necessary to navigate the intricate relationship between finance and the web.

On the use of AI to enhance financial compliance, such as in KYC and AML

The use of AI to improve regulatory compliance, for example, in Know-Your-Customer (KYC) and Anti-Money Laundering (AML) practices, presents complexities. While greater availability of consumer data aids in identifying suspicious patterns, hidden risks of discrimination and unfair use of sensitive data may emerge.

Potential conflicts may arise between the EU AI Act and banking laws, as the former introduces heightened requirements for applications processing personal data, resulting in increased compliance costs. The combined effect of these different regulations remains ambiguous and will depend on the tools firms deploy and their implications for effective regulatory compliance.

The views expressed in this article are those of Alessio Azzutti.