FRANKFURT SCHOOL BLOG

Everything under control? AI and machine learning in the finance industry
Executive Education / 15 October 2024
Lecturer Financial Planner Days (fs.de/fpt); Executive Circle Asset & Wealth Management
Prof. Dr. Jan Neuhöfer is a Professor of Virtual Systems and Computer Graphics at HAW Hamburg. He has been working in the areas of virtual reality, augmented reality and artificial intelligence for over 20 years. Prof. Neuhöfer has gained valuable experience and built up an extensive network in international companies such as Accenture, Siemens and Dassault Systèmes, as well as in research and academia.


Artificial intelligence is increasingly shaping our daily lives and revolutionising the banking and finance industry. From customer service to asset and risk management, machine learning is changing the way banks offer their services and conduct business. From chatbots to robo-advisors and automated trading, the areas of application are diverse and offer enormous potential to improve the customer experience and optimise processes. At the same time, these developments raise questions about data protection and ethical use.

To assess the current developments objectively, it may be helpful to take a broader view. So let’s start with the often-used distinction between “weak AI” and “strong AI”, illustrated with examples from the world of finance, followed by a brief look at the new regulatory framework and a general recommendation for action.

Weak AI – already the norm in finance

In principle, the use of “weak AI” to assist humans has long been state of the art, including in the finance industry. But it can be dangerous if we rely on it too much, or even blindly.

“Weak AI” refers to all those systems that can perform specific, clearly defined tasks using a fixed methodology. The main aim of such systems is to assist people in their (working) lives. This can involve taking over repetitive tasks, but also, for example, making suggestions when drafting texts and contracts.

A prominent example of “weak AI” in the banking world is the so-called robo-advisor, which offers low-cost, personalised investment advice. Robo-advisors analyse market data and customer profiles to provide tailored recommendations that would otherwise only be available to affluent customers with access to private wealth advisors.
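The core logic can be sketched in a few lines. The following Python snippet is a deliberately simplified illustration, not how any real robo-advisor works: actual systems weigh many more inputs (market data, investment horizon, tax situation, rebalancing rules). Here, a hypothetical risk score from a customer questionnaire is mapped to a model portfolio:

```python
def recommend_allocation(risk_score: int) -> dict:
    """Map a questionnaire risk score (1 = cautious, 5 = aggressive)
    to an illustrative equities/bonds split, in percent."""
    if not 1 <= risk_score <= 5:
        raise ValueError("risk score must be between 1 and 5")
    equity_pct = 20 * risk_score       # 20 percentage points of equities per risk point
    bond_pct = 100 - equity_pct
    return {"equities": equity_pct, "bonds": bond_pct}

print(recommend_allocation(2))   # cautious client: mostly bonds
print(recommend_allocation(5))   # aggressive client: all equities
```

The point of the sketch is the division of labour: the rule is fixed and transparent, while the “intelligence” of a production system lies in how the inputs to such rules are derived from data.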

Another example of the use of AI is fraud detection. AI systems can be used to recognise unusual patterns in transaction data that indicate fraudulent activity. These systems can be continuously improved through ongoing learning, enabling them to detect and prevent increasingly subtle methods of fraud.
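The idea of recognising unusual patterns can be illustrated with a minimal sketch. Real fraud-detection systems use machine-learning models trained on many transaction features; the snippet below reduces the principle to a single statistical rule (flag an amount that deviates sharply from an account’s usual spending), purely for illustration:

```python
import statistics

def is_suspicious(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations away from the account's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

history = [45.0, 52.0, 38.0, 61.0, 49.0, 55.0, 43.0]   # typical card spending
print(is_suspicious(history, 2500.0))   # → True  (far outside the usual range)
print(is_suspicious(history, 60.0))     # → False (within normal variation)
```

A learning system replaces the hand-set threshold with patterns extracted from labelled fraud cases, which is what allows it to adapt to increasingly subtle methods over time.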

Strong AI – the future is wide open

Strong AI goes one step further than weak AI, aiming to cooperate with humans on an equal footing or, if necessary, to replace and outperform them altogether. Its main characteristic is the ability to learn independently, plan strategically, act with foresight and reflect critically. However, this has only been achieved to a limited extent, if at all.

One example of the emergence of “strong AI” in the finance industry is automated or algorithmic securities trading. In principle, algorithmic trading is not new and the conditions for its use are clearly regulated in the Securities Trading Act. What is new, however, is the use of machine learning to analyse market trends and make trading decisions. This allows a wide range of data sources, including historical price data and news feeds, to be processed in milliseconds. This can create an almost superhuman competitive advantage, although it will become less significant as the use of this technology becomes more widespread.
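To make the mechanism concrete, here is a classic rule that long predates machine learning: a moving-average crossover signal. It is a hedged sketch of algorithmic trading in its simplest form; ML-based systems replace such fixed rules with learned models over many data sources, but the decision loop (data in, signal out) is the same:

```python
def moving_average(prices: list[float], window: int) -> float:
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def trading_signal(prices: list[float], short: int = 3, long: int = 5) -> str:
    """Emit 'buy' when the short-term average rises above the
    long-term average, 'sell' in the opposite case, else 'hold'."""
    if len(prices) < long:
        return "hold"                      # not enough data yet
    if moving_average(prices, short) > moving_average(prices, long):
        return "buy"
    if moving_average(prices, short) < moving_average(prices, long):
        return "sell"
    return "hold"

print(trading_signal([100.0, 101.0, 102.0, 103.0, 104.0]))   # rising trend → buy
print(trading_signal([104.0, 103.0, 102.0, 101.0, 100.0]))   # falling trend → sell
```

The millisecond advantage described above comes not from the rule itself but from executing such loops at machine speed across vastly more inputs than a human could monitor.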

Science is still a long way from creating a general “strong AI” that actually comes close to the human brain and its diverse abilities. This is mainly because we still do not have a comprehensive understanding of how the biological role model actually works.

 

Side effects of strong AI and the EU regulation

It should generally be noted that machine learning can be used to develop powerful systems without requiring a great deal of user expertise. However, these systems are so complex that it is difficult for humans to understand them. As such, a high level of expert knowledge, experience and, above all, a sense of responsibility is required when developing and using them.

The European Union has recognised the need to establish clear rules for the use of AI, particularly in sensitive areas such as finance. With this in mind, work on a risk-based regulation on artificial intelligence began in 2019. The regulation was adopted by the 27 EU member states in its final form on 21 May 2024 and came into force on 1 August 2024. It differentiates between AI systems with

  1. minimal risk such as competitors in computer games,
  2. limited risk such as advisory chatbots,
  3. high risk such as AI-based credit scoring and
  4. unacceptable risk, such as AI-based evaluation of human social behaviour (so-called ‘social scoring’).
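The risk-based logic of the regulation can be pictured as a lookup from use case to obligations. The snippet below is a rough illustration only: the tier names follow the AI Act, but the use-case labels and the obligation summaries are simplified paraphrases, not legal classifications:

```python
# Illustrative mapping of example use cases to the AI Act's four risk tiers.
RISK_TIERS = {
    "game_opponent": "minimal risk",
    "advisory_chatbot": "limited risk",
    "credit_scoring": "high risk",
    "social_scoring": "unacceptable risk",
}

# Simplified summary of what each tier entails for the provider.
OBLIGATIONS = {
    "minimal risk": "no additional obligations",
    "limited risk": "transparency obligations (disclose AI use)",
    "high risk": "conformity assessment, documentation, human oversight",
    "unacceptable risk": "prohibited",
}

def required_action(use_case: str) -> str:
    """Look up the (simplified) obligations for an example use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, "assess individually")

print(required_action("credit_scoring"))   # high-risk tier
print(required_action("social_scoring"))   # banned outright
```

The practical consequence for banks is that the same underlying technology, say a scoring model, can fall into very different tiers depending on what it is used for.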

A European Artificial Intelligence Board, working in conjunction with national authorities, will be responsible for monitoring compliance with these regulations. While the rules are intended to increase consumer safety, there are concerns that they could also have a negative impact on the innovative strength of European companies.

The high and increasing energy demand of machine learning and its impact on the global climate should also not be overlooked. This is an additional challenge for banks and companies that are committed to sustainable business practices.

What can be done?

Given its high performance and broad range of applications, AI will soon become standard in many areas of working life. Using it effectively will therefore be crucial to the competitiveness and long-term prospects of every individual and organisation.

It is therefore essential for board members and executives to start exploring the possibilities and challenges of today’s AI systems as soon as possible. A proactive approach that takes into account both the potential and the risks will be crucial to remaining competitive in an increasingly digital world.

—————————————————————————————

This text is a revised version of the article (in German) Keine Angst vor KI. Was man heute wissen sollte, damit es morgen kein böses Erwachen gibt (Don’t be afraid of AI. What you should know today to avoid a rude awakening tomorrow) by the same author in the B2B Branchenbuch from 24 August 2024.
