What Does the EU Artificial Intelligence (AI) Act Mean for Financial Services?

By Vera Romano
March 20th 2024 | 4 minute read

The European Parliament last week approved the world’s first binding law to regulate the use of artificial intelligence (AI), in a move with wide implications for the financial services industry.

AI technologies power a fast-expanding range of financial market functions: anti-money laundering and cybersecurity, customer interactions, the tailoring of wealth and asset management products to clients, risk analysis, investment decision-making, and order routing and trade execution. The European Union AI Act, noted global law firm Skadden, “will clearly impact technologies being used and considered in the [financial services] sector,” with firms’ obligations under the regulation based on the potential risk and impact of each AI use case.

What the AI Act will do

The landmark Act “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field,” noted a Parliament press release announcing the law’s adoption. It seeks to make the technology more human-centric, not least by combatting discrimination and introducing greater transparency.

AI systems considered high risk – such as those used for essential banking services (e.g., credit scoring models and KYC/AML procedures) and for pricing and risk assessments in insurance – must comply with strict requirements. Firms must assess and reduce risks, maintain use logs, and ensure their systems are transparent and accurate and operate under human oversight. Citizens will have the right to submit complaints and receive explanations about decisions based on high-risk AI systems that affect their rights.

By contrast, AI-powered customer service chatbots are categorized as limited risk with fewer obligations.

General-purpose AI (GPAI) systems, such as large language models and generative AI applications, will need to meet various transparency requirements, including publishing detailed summaries of the content used to train the models they’re based on. Powerful GPAI models that could pose systemic risks will face additional requirements, including the need to perform model evaluations, assess and mitigate systemic risks, and report on incidents.

“Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such,” stated the Parliament press release.

AI applications that threaten citizens’ rights, “including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases”, will be banned.

Global patchwork of AI regulation

The EU’s AI Act, while the most advanced and comprehensive, is just one of the measures being considered and undertaken by governments and regulators around the world.

The White House’s Executive Order on AI, issued on Oct. 30, 2023, specifically calls out financial services, observed the Skadden note. The Order tasks regulatory agencies with protecting American consumers from fraud, discrimination and threats to privacy arising from AI, and with addressing risks to financial stability. Transparency and explainability of AI models are high priorities. The U.S. Treasury is due to issue a follow-up public report on best practices for financial institutions to manage AI-specific cybersecurity risks by March 28, 2024.

Rather than introduce new legislation, the U.K. Government has proposed an agile, “pro-innovation” approach that will be regulator-led and principles-based. Its cross-sector, outcome-based framework for regulating AI is underpinned by five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Deloitte noted that U.K. regulators will implement the framework in their sectors and domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their annual AI strategic plans by April 30, 2024. Targeted legislative interventions to address gaps in the regulatory framework, particularly the risks posed by complex GPAI, may be introduced down the line.

Next steps for financial institutions

Despite the variation in international approaches and the potential for regulatory divergence, authorities have voiced common concerns that financial market participants must address. These center on the complexity of the data sources used in AI and the potential for bias within them, the need for robust governance to monitor and maintain data quality, the transparency of financial models, risk monitoring, and the twin priorities of protecting consumers and fostering trust.

Organizations will need to take stock of the AI systems they already use, and any they plan to introduce, and assess how each complies with the emerging rules. Breaching the EU AI Act’s prohibitions, for instance, can result in fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher, so areas of risk must be identified, managed and mitigated.
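As a rough illustration of what “taking stock” might look like in practice, the Python sketch below maps an inventory of AI use cases onto the risk tiers described above and computes the worst-case fine cap (the higher of €35 million and 7% of worldwide annual turnover). All names here – `AIUseCase`, `TIER_OBLIGATIONS`, `worst_case_fine_eur` – are hypothetical, and the tier model is a deliberate simplification of the Act, not a compliance tool.

```python
from dataclasses import dataclass

# Simplified view of the Act's risk tiers as summarized in this article:
# prohibited uses are banned outright; high-risk uses carry strict duties;
# limited-risk uses (e.g. chatbots) mainly face transparency obligations;
# GPAI models have their own transparency regime.
TIER_OBLIGATIONS = {
    "prohibited": "banned outright",
    "high": "risk assessment, use logs, transparency, accuracy, human oversight",
    "limited": "transparency duties (e.g. disclose that the user is talking to AI)",
    "gpai": "training-content summaries; extra duties if systemically risky",
}

@dataclass
class AIUseCase:
    name: str
    tier: str  # one of the TIER_OBLIGATIONS keys

def worst_case_fine_eur(annual_turnover_eur: float) -> float:
    """Cap on fines for breaching the Act's prohibitions:
    EUR 35m or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Illustrative inventory for a hypothetical firm
inventory = [
    AIUseCase("credit scoring model", "high"),
    AIUseCase("KYC/AML screening", "high"),
    AIUseCase("customer service chatbot", "limited"),
]

for uc in inventory:
    print(f"{uc.name}: {TIER_OBLIGATIONS[uc.tier]}")

# e.g. a firm with EUR 1bn turnover faces a cap of EUR 70m, since 7% > EUR 35m
print(f"Worst-case fine cap: EUR {worst_case_fine_eur(1_000_000_000):,.0f}")
```

The point of even a toy inventory like this is that obligations attach to individual use cases, not to a firm as a whole, so the same institution may simultaneously run banned, high-risk and limited-risk systems.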

Aligning with the AI Act, though, noted FiSer Consulting, “means winning consumer trust, ensuring ethical AI operations, and potentially achieving competitive differentiation in the market.”

Vera Romano
Vera is responsible for driving Deep Pool’s overall marketing strategy. A qualified and proven marketer with 20+ years of experience at companies ranging from tech start-ups to large corporates, she has led creative teams in developing and managing innovative brands through strategic campaigns to grow market share, boost sales and achieve targets.