A collaborative article by Chartis Research and Silent Eight
The ability of generative pre-trained transformer (GPT) technology to create new content and answer challenging questions has captured the public’s imagination. Media hype notwithstanding, artificial intelligence (AI) has long been used by financial services firms to process large volumes of sensitive data quickly and accurately. Advanced techniques such as natural language processing (NLP) now play a key role in the fight against financial crime. In this paper, we examine some of the key challenges financial institutions face when managing financial crime risks – and how NLP can help.
Language models and NLP – the foundations
Historically, NLP has focused on the use of computers to understand, interpret and generate human language. It has progressed through distinct phases: from early rule-based systems reliant on handcrafted linguistic rules, to statistical models that use probabilistic methods, and later to neural networks and word embeddings that capture semantic relationships. The development of transformer-based architectures began with the 2017 paper ‘Attention Is All You Need’, a landmark in the evolution of machine learning that kick-started the shift to large language models (LLMs). Transformers revolutionized NLP by allowing models to capture both short- and long-range dependencies in text, and models such as GPT and bidirectional encoder representations from transformers (BERT) have built on this architecture.
Generative AI – creating something new
AI and LLMs hold considerable promise for a variety of anti-money laundering (AML) compliance workflow tasks. Specifically, they can:
- Automate document analysis. Organizations can leverage LLMs to automate the analysis of Know Your Customer (KYC) documents. In corporate/institutional KYC, for example, AI can be used to parse articles of incorporation, shareholder agreements or business licenses, delivering the relevant information automatically. At the individual level, AI is already used extensively in identity analysis. By employing NLP techniques, institutions can sift through vast numbers of documents, extracting pertinent information such as personal details, identification numbers and financial records.
- Generate summaries. To comply with KYC regulations, firms often have to review extensive and complex regulatory documents. To help, they can task LLMs with generating concise summaries of these documents, distilling key information and compliance requirements. Generative AI (GenAI) can be applied to suspicious activity reports (SARs), for example, automating their preparation and analysis, and assembling them into the standardized formats required by regulators.
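The document-analysis step above can be illustrated with a deliberately simplified sketch. In practice, extraction would rely on trained NLP models and jurisdiction-specific document schemas; the field names and patterns below are hypothetical, chosen only to show how structured details might be pulled from raw KYC text.

```python
import re

# Illustrative, simplified patterns; a production KYC pipeline would use
# trained NLP models, not hand-written regexes. Field names are hypothetical.
PATTERNS = {
    "passport_number": re.compile(r"\b[A-Z]{2}\d{7}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def extract_kyc_fields(text: str) -> dict:
    """Return candidate identification details found in raw document text."""
    return {field: pattern.findall(text) for field, pattern in PATTERNS.items()}

sample = "Passport AB1234567, born 04/11/1986, account GB29NWBK60161331926819."
print(extract_kyc_fields(sample))
```

A real system would feed such extracted candidates into downstream validation (checksum rules, cross-document consistency checks) rather than trusting pattern matches directly.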
Managing the risks
For financial institutions, adopting GenAI and LLMs can come with several risks that they must consider carefully, including:
- Bias and fairness. GenAI can perpetuate biases present in its training data and in the prompts used to steer the model. This can lead to outcomes that disproportionately target certain populations or risk groups, or skew results to the point that genuine risks go unidentified.
- Model degradation. Without proper maintenance, the performance of LLMs may degrade, leading to inaccurate results. For example, retraining a model with updated data is necessary to keep it current with changing language patterns, regulatory requirements or evolving financial crime typologies.
- Hallucinations. A common issue with LLMs and GenAI is ‘hallucination’, whereby a model generates a plausible-sounding but unsupported output rather than retrieving or summarizing actual information, leading to incorrect answers to queries. False positives or negatives caused in this way can be extremely hazardous.
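The model-degradation risk above implies a concrete control: regularly comparing the model's alerts against analyst-confirmed outcomes on a reviewed sample. The following minimal sketch assumes a precision threshold of 0.6, which is illustrative only and not regulatory guidance.

```python
# Minimal monitoring sketch: measure what share of model alerts analysts
# confirmed as genuinely suspicious, and flag the model when that share drops.

def alert_precision(flags: list, confirmed: list) -> float:
    """Share of model alerts that analysts confirmed as suspicious."""
    alerts = sum(flags)
    true_positives = sum(f and c for f, c in zip(flags, confirmed))
    return true_positives / alerts if alerts else 1.0

def needs_retraining(flags: list, confirmed: list, threshold: float = 0.6) -> bool:
    """Flag the model for review when confirmed-alert precision falls below threshold."""
    return alert_precision(flags, confirmed) < threshold

flags     = [True, True, True, False, True, False]   # model alerts
confirmed = [True, False, False, False, True, False] # analyst outcomes
print(needs_retraining(flags, confirmed))  # precision 2/4 = 0.5 → True
```

In production this check would run on a rolling window, so that drift from changing language patterns or new financial crime typologies triggers retraining promptly.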
To mitigate these risks, firms must have robust oversight and governance frameworks. Regular testing and validation ensure that models perform as intended and remain compliant with regulatory standards. Establishing continuous feedback loops allows for ongoing refinement based on the real-world outcomes of models, while monitoring tools can detect and correct biases or errors early.
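One form the bias monitoring mentioned above can take is comparing alert rates across customer segments. The sketch below is an assumption-laden illustration: the group labels and the 0.25 disparity tolerance are invented for the example, and a real fairness review would use richer statistical tests.

```python
from collections import defaultdict

# Illustrative bias check: does any customer segment get alerted at a rate
# far above another? Tolerance of 0.25 is an arbitrary example value.

def alert_rates(records):
    """records: iterable of (group, was_alerted) pairs → alert rate per group."""
    totals, alerts = defaultdict(int), defaultdict(int)
    for group, alerted in records:
        totals[group] += 1
        alerts[group] += int(alerted)
    return {g: alerts[g] / totals[g] for g in totals}

def disparity_flagged(records, tolerance: float = 0.25) -> bool:
    """True when the gap between highest and lowest group alert rate exceeds tolerance."""
    rates = alert_rates(records).values()
    return max(rates) - min(rates) > tolerance

records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", True), ("B", False)]
print(disparity_flagged(records))  # rates A=0.25, B=0.75 → gap 0.5 → True
```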
Financial institutions must therefore prioritize these risk management strategies when adopting GenAI in AML processes. This involves implementing robust governance frameworks covering an LLM's data, training and modeling, aligned with the intended results and use cases. In this way firms can ensure fairness, transparency and accountability in AI decision-making. Financial institutions should also invest in stringent data privacy and cybersecurity measures to safeguard sensitive customer information and protect against potential breaches. Regular monitoring, auditing and retraining of GenAI models is essential to maintain performance and compliance with evolving regulatory standards.
About Silent Eight
From automating name and transaction screening decision-making to streamlining transaction monitoring investigations, Silent Eight’s AI solutions are focused on driving advances in compliance and risk management, with embedded transparency, auditability and explainable AI (XAI). This ensures that financial institutions can understand and trust the decisions made by AI models – critical factors in maintaining regulatory compliance and stakeholder confidence.
Silent Eight also leverages NLP to transform document and data analysis across the compliance workflow, enabling financial institutions to link and extract critical details from datasets, identify potential risks with greater accuracy and shorten case investigation times. Its solutions also include continuous monitoring, bias-detection tools and adaptive feedback mechanisms to refine models in line with evolving risks and policies. These safeguards, combined with robust privacy measures, enable firms to adopt advanced AI technologies without compromising fairness, accountability or data security.
To learn more about Silent Eight’s innovative AI solutions, visit silenteight.com.
Copyright Infopro Digital Limited. All rights reserved.