The integration of generative AI in finance offers significant benefits, including enhanced data processing and personalised services. However, it also poses challenges such as biases and cybersecurity risks, prompting the industry to proceed in small but cautious steps. Photo: Shutterstock

Generative AI is set to transform finance by improving risk management, customer service and operational efficiency. However, it requires careful implementation to mitigate inherent risks, which is why industry-wide adoption and deployment are proceeding at a risk-averse pace.

In the realm of automated response technology, generative artificial intelligence (GenAI) has made a stunning leap forward in both speed and scope. It now handles complex and time-consuming tasks such as analysing, interpreting and creating numerical data, text, imagery, audio and even programming code with remarkable efficiency. This advancement opens numerous possibilities across a multitude of sectors and varied applications.

This transformative capability could reshape institutional operations and user interactions, notably within the financial sector. It has the potential to significantly enhance various dimensions of finance, from advancing risk management to revolutionising customer service. Consequently, several euro area banks are actively investigating GenAI to augment their digital transformation initiatives.

However, alongside these remarkable potential benefits, this progress also brings risks, particularly if the underlying algorithms and models become too similar and are concentrated among only a few players. Such concentration can amplify operational risks and entrench systemic biases. These potential risks underscore the importance of implementing GenAI thoughtfully and continuously monitoring its impact, so that its positive effects are realised while potential drawbacks are mitigated.

GenAI

The latest generation of supercomputers and cutting-edge exascale computing technologies are great at handling huge amounts of data incredibly quickly. They’re used for things like predicting the weather, discovering new drugs and studying the stars, to name a few. But what sets GenAI apart is its neural network algorithms. These algorithms learn from the data itself, improving over time, and have evolved from machine learning and deep learning techniques. By analysing vast datasets to uncover probabilistic patterns within interconnected parameters, GenAI models can make decisions more efficiently and accurately than ever before. Moreover, given the ‘statistically derived outcomes’ inherent to neural network algorithms, these models adeptly manage nonlinear variables like images, sounds, speech and text, where strict adherence to a predetermined pattern is not mandatory.
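
To make the “learning from data, improving over time” idea concrete, here is a minimal sketch in Python (a toy example using only numpy, not one of the models discussed above) of the iterative, gradient-based loop that neural networks build on: a parameter is nudged at every step so that the model’s predictions fit the training data a little better.

```python
# Toy sketch of iterative learning: a single parameter fitted by gradient descent.
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: the "pattern" to learn is y = 3x plus a little noise.
x = rng.uniform(-1, 1, size=(200, 1))
y = 3 * x + rng.normal(0, 0.1, size=(200, 1))

w = rng.normal(size=(1, 1))           # single trainable parameter
learning_rate = 0.1

for step in range(1, 201):
    pred = x @ w                      # forward pass: current predictions
    error = pred - y
    loss = float(np.mean(error ** 2)) # how wrong the model currently is
    grad = 2 * x.T @ error / len(x)   # gradient of the loss w.r.t. w
    w -= learning_rate * grad         # update step: this is the "learning"
    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.4f}  learned weight {w[0, 0]:.2f}")
```

Run repeatedly, the loss shrinks and the learned weight converges towards the true pattern; full-scale GenAI models apply the same principle across billions of parameters.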

There are essentially three key components in the GenAI value chain: the training data, the model itself, and the deployment or implementation step. Let’s take a closer look.

Training data

Within the financial world, data is highly structured and expressed within well-defined limits. This allows for more extensive, clearly defined analyses and data-dependent decision-making, including real-time fraud detection and enhanced risk management. Yet, whether the data is structured or unstructured, if the training dataset contains inherent biases or errors, the output will have significant quality issues.
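
As a simple illustration of that point, the sketch below (a hypothetical fraud-detection toy using numpy and scikit-learn, with invented figures, not an example from the article) trains the same model twice: once on a training set in which fraud is almost absent, and once on a more representative one, then compares how much real fraud each version catches on the same held-out data.

```python
# Toy illustration of how biased training data degrades output quality.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_transactions(n, n_fraud):
    """Simulate transactions; fraudulent ones tend to involve larger amounts."""
    is_fraud = np.zeros(n, dtype=int)
    is_fraud[:n_fraud] = 1
    amounts = rng.normal(loc=np.where(is_fraud == 1, 500, 100), scale=150, size=n)
    return amounts.reshape(-1, 1), is_fraud

# Biased training set: fraud is nearly absent, so the model barely learns it.
X_biased, y_biased = make_transactions(5000, n_fraud=5)
# More representative training set.
X_repr, y_repr = make_transactions(5000, n_fraud=500)
# Held-out data reflecting the conditions the model will actually face.
X_test, y_test = make_transactions(2000, n_fraud=200)

for label, (X, y) in [("biased", (X_biased, y_biased)),
                      ("representative", (X_repr, y_repr))]:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    recall = recall_score(y_test, model.predict(X_test))
    print(f"{label:>14} training data -> share of real fraud caught: {recall:.2f}")
```

The model trained on the unrepresentative set misses most of the fraud it is meant to flag, even though the algorithm itself is unchanged: the quality issue comes entirely from the data it learned from.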

AI algorithm

On the other hand, even if the training data is of high quality and without inherent biases, the AI algorithm could still introduce unintended biases, especially when repurposed with new training datasets intended for a different application. This phenomenon can occur due to various factors such as algorithmic design choices, data representation or underlying assumptions within the model.

Moreover, since GenAI operates through statistical iterations on nonlinear parameters, non-numeric outputs will themselves vary from run to run, exhibiting small differences. Ensuring the robustness and replicability of these “predictions” or “interactions” therefore remains a challenge. This brings us to the end users, who, as humans, may interpret the text or results differently, highlighting the need for a cautious and targeted approach.
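
The sketch below (a toy next-token example with made-up scores, not a real model) shows why such outputs differ between runs: generation samples from a probability distribution, and only by pinning down the sampling settings, for instance greedy decoding or a fixed random seed, do repeated runs become replicable.

```python
# Toy sketch of sampled generation and why it is hard to replicate exactly.
import numpy as np

TOKENS = ["approve", "review", "escalate", "decline"]
LOGITS = np.array([2.1, 1.9, 0.4, 0.1])      # hypothetical model scores

def generate(temperature, rng):
    """Sample one 'decision' token; lower temperature -> more deterministic."""
    if temperature == 0:                      # greedy decoding: fully replicable
        return TOKENS[int(np.argmax(LOGITS))]
    scaled = LOGITS / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                      # softmax over the scaled scores
    return TOKENS[rng.choice(len(TOKENS), p=probs)]

for temp in (0, 0.7):
    rng = np.random.default_rng()             # unseeded, like a production service
    outputs = {generate(temp, rng) for _ in range(10)}
    print(f"temperature={temp}: {sorted(outputs)}")
# Fixing both the temperature and the random seed is what makes runs replicable.
```

At a non-zero temperature the same prompt yields a spread of answers across ten calls, which is exactly the small run-to-run variation that complicates auditing and replicating GenAI outputs.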

Deployment

Lastly, let’s consider deployment or implementation. Interoperability with legacy systems and data curation across different institutions can pose challenges. Where deployment requires external expertise, that dependency could become critical under certain circumstances.


Opportunities and challenges

The benefits of GenAI are extensive, enhancing data processing capabilities, improving predictive accuracy and automating complex tasks traditionally handled by humans. Notably, GenAI is poised to significantly improve customer-facing services, particularly in creating personalised advisory services and managing complaints effectively. It will also play a crucial role in financial advice by minimising deductibility bias: people’s tendency to make financial decisions based on maximising tax benefits rather than on purely financial considerations, which in some cases undermines their overall financial optimisation.
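
As a simplified, purely hypothetical illustration of deductibility bias (the figures below are invented and ignore, among other things, the taxation of returns), the product with the larger upfront tax break is not automatically the better financial outcome.

```python
# Hypothetical figures only: a simplified comparison of two savings products.
invested = 10_000
marginal_tax_rate = 0.40      # assumed marginal income-tax rate
years = 10

# Option A: the contribution is tax-deductible, but the gross return is lower.
tax_saving_a = invested * marginal_tax_rate
value_a = invested * (1 + 0.02) ** years + tax_saving_a

# Option B: no deduction, but a higher gross return.
value_b = invested * (1 + 0.06) ** years

print(f"Deductible product after {years} years:     {value_a:,.0f}")
print(f"Non-deductible product after {years} years: {value_b:,.0f}")
# Despite the 4,000 upfront tax benefit, option B ends up ahead in this example.
```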

However, the statistical nature of GenAI introduces inherent risks, notably biases stemming from training data that may result in inaccurate predictions or discriminatory outcomes. These biases are compounded by the complexity of AI algorithms, making it challenging to fully comprehend, identify and address them effectively.

This lack of transparency and the limited interpretability of customised data or sentiment pose significant challenges for financial institutions. They struggle to explain and justify AI-based decisions, especially in cases where potential financial losses or vulnerabilities are at stake.

Furthermore, the increasing dependence on AI and GenAI for crucial financial tasks heightens concerns regarding operational risk and cybersecurity. Institutions face the potential for errors, system failures or malicious attacks due to an overreliance on AI-specific processing units and infrastructure, often without adequate human oversight.

The recently adopted AI Act is a welcome development in the European Union, as continued monitoring of emerging challenges and biases through targeted and efficient regulation is essential for the advancement of AI and GenAI. The critical factor lies in bridging the gap between innovation and new functionalities on the one hand and robust, consistent regulation on the other. This synchronisation is essential for the responsible and effective integration of AI technologies in financial systems.

In effect, the lack of transparency and a clear understanding of the outcomes of GenAI, along with associated risks and regulatory scrutiny, appear to limit the large-scale adoption and deployment of GenAI features in finance. Nonetheless, this cautious approach has its merits in maintaining the integrity and stability of the financial system.

This article was published for the Delano Finance newsletter, the weekly source for financial news in Luxembourg.