The third era of alert adjudication

A collaborative article by Chartis Research and Silent Eight

In the past decade, alert adjudication in financial crime compliance has undergone significant evolution, transitioning from manual human intervention to scoring systems, and finally arriving at today’s third era – defined by automation and explainable artificial intelligence (XAI). This third era emphasizes the need for efficiency, transparency and adaptability in handling high alert volumes while ensuring precise risk detection and regulatory compliance. By harnessing cutting-edge AI agent solutions, financial institutions can now navigate these challenges with systems designed for robust decision-making and operational excellence. This paper explores the journey to the third era and examines how firms can employ innovative AI-powered solutions to lead in this space.

The first era: manual intervention

Historically, alert adjudications were conducted by analysts who manually reviewed and assessed the alerts generated by Know Your Customer (KYC) screening, payments and transaction monitoring systems. Legacy technologies were subject to the limitations of matching algorithms that could not cope with multiple datasets and languages. This led to high alert volumes and many false positives that had to be reviewed – an approach that was expensive, time-consuming and prone to human error. The volume and complexity of alerts overwhelmed human adjudicators, leading to delays, backlogs and critical alerts being missed. While the move to cheaper offshoring teams created some short-term cost benefits, it failed to tackle these fundamental issues as businesses grew.

The second era: suppression and scoring systems

The emergence and development of suppression and scoring systems to manage large alert volumes ushered in the second generation of technology tools. This approach assigned risk scores to alerts based on factors such as the strength of name matches against KYC or sanctions watchlists, or the transaction amounts, frequencies and suspicious patterns flagged by transaction monitoring, and set thresholds to automatically suppress or disregard alerts below given risk levels. These systems were designed to reduce false positives and escalate or prioritize higher-risk alerts for manual review and investigation, thus reducing the burden on adjudicators.
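
As a rough, minimal sketch of how such a threshold-based system operates (the factor names, weights and threshold below are illustrative assumptions, not drawn from any particular vendor's product), per-factor risk scores are combined into a single score, and alerts that fall below the configured threshold are suppressed while the rest are escalated for review:

```python
# Minimal sketch of second-era threshold-based scoring; all weights,
# factor names and the threshold are illustrative assumptions.
FACTOR_WEIGHTS = {
    "name_match": 0.5,     # closeness of a match against a KYC/sanctions watchlist
    "txn_amount": 0.3,     # transaction size relative to expected activity
    "txn_frequency": 0.2,  # unusual frequency or pattern of transactions
}
SUPPRESSION_THRESHOLD = 4.0  # alerts scoring below this are auto-closed

def adjudicate(factor_scores: dict[str, float]) -> str:
    """Combine per-factor scores (0-10) into one risk score and route the alert."""
    score = sum(FACTOR_WEIGHTS[f] * s for f, s in factor_scores.items())
    return "suppress" if score < SUPPRESSION_THRESHOLD else "escalate"

# A strong name match with small, infrequent transactions still clears the bar.
print(adjudicate({"name_match": 8, "txn_amount": 2, "txn_frequency": 1}))  # escalate (score 4.8)
```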

While these systems improved efficiencies and streamlined the alert adjudication process, problems remain:

  • Out-of-the-box score thresholds or methodologies that do not match a financial institution’s policies. Vendors may use alert-resolution methodologies that are at odds with banks’ risk appetites. This can result in screening results that are misaligned with the financial institution’s business and regulatory needs.
  • Red flags missed by score aggregation. When alert scores are aggregated from multiple data points, the process often averages or smooths out individual scores to produce an overall risk assessment. This tendency to bring scores closer to a mean value can dilute the impact of critical red flags or risk indicators. As a result, high-risk signals that might warrant further investigation can be overlooked, creating blind spots in risk detection. Effective solutions must ensure that significant anomalies or elevated scores are appropriately highlighted, rather than being lost in the aggregation process (a simple illustration of this effect follows this list).
  • False negatives. These systems have the potential to generate false negatives, whereby alerts that should have been flagged as high-risk are mistakenly suppressed or assigned low scores. If an alert is dismissed as a false positive with a confidence score of nine out of 10, for example, the system is implicitly conceding that, mathematically speaking, it may be wrong 10% of the time. Where alert volumes are significant, therefore, these systems are likely to produce false negatives as well.
  • Opaque decision-making. A lack of transparency can undermine the reliability and credibility of the decision-making process. Scoring systems often provide an overall score for an alert without offering a comprehensive breakdown of the underlying positive and negative factors, or the data points that influenced the score.
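
To illustrate the dilution effect noted above, the hypothetical example below (again with made-up factors and thresholds) shows how a strong watchlist hit can be averaged below the suppression threshold by benign transaction factors, and how a simple red-flag override would keep the alert in the review queue:

```python
# Hypothetical illustration of red flags lost in score aggregation.
SUPPRESSION_THRESHOLD = 4.0
RED_FLAG_CEILING = 8.0  # any single factor at or above this is treated as a red flag

def mean_score(factor_scores: dict[str, float]) -> float:
    return sum(factor_scores.values()) / len(factor_scores)

def adjudicate_mean(factor_scores: dict[str, float]) -> str:
    # Pure averaging: a critical factor can be diluted by benign ones.
    return "suppress" if mean_score(factor_scores) < SUPPRESSION_THRESHOLD else "escalate"

def adjudicate_red_flag_aware(factor_scores: dict[str, float]) -> str:
    # Escalate if any individual factor breaches the red-flag ceiling,
    # regardless of what the aggregate score says.
    if max(factor_scores.values()) >= RED_FLAG_CEILING:
        return "escalate"
    return adjudicate_mean(factor_scores)

alert = {"name_match": 9, "txn_amount": 1, "txn_frequency": 1}  # strong watchlist hit
print(adjudicate_mean(alert))            # suppress -- the mean is 3.67, red flag diluted
print(adjudicate_red_flag_aware(alert))  # escalate -- the 9/10 match is surfaced
```

In practice, the red-flag ceiling itself would be governed by the institution's risk appetite and policy rather than a fixed constant.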

The emerging third era of alert adjudication: automation and explainable AI

A human-centered approach to AI in sanctions screening and compliance prioritizes explainability, ensuring that analysts can understand and justify the decisions made by AI models. XAI techniques aim to make the reasoning behind model outputs transparent, particularly in such critical compliance functions as alert adjudication. This is often achieved by combining rule-based systems, which provide clear decision paths, with learning-based models such as natural language processing (NLP)-driven tools that adapt and improve over time.

While deep learning neural networks can achieve high accuracy, their complexity often makes them difficult to interpret. This ‘black box’ nature has historically limited their role in compliance settings. By integrating explainable elements and establishing transparent decision pathways, financial institutions can deploy advanced AI solutions while maintaining the trust and control necessary for effective sanctions screening and risk management. Financial institutions need to explain outcomes to regulators and stakeholders, as failing to do so can incur high costs. To this end, XAI techniques focus on making AI and neural network models understandable to humans. Some notable techniques, as applied to alert adjudication, include:

  • Identifying feature importance. Determining which input features are most influential in shaping the model’s output (e.g., customer demographics or transaction history).
  • Model visualization. Creating graphical representations of an AI model’s internal structure and decision-making processes to make its operations more understandable to human users. In the context of KYC, this could include visualizing how customer data flows through different stages of analysis, and highlighting such key decision points as identity verification, risk scoring and alert generation. These visual tools help analysts to trace how conclusions are reached and identify potential biases.
  • Rule-based explanations. Generating human-readable rules or decision trees that mimic the behavior of the AI model (a minimal sketch of this approach follows this list).
  • Local/global explanations. Providing explanations either for individual predictions or decisions made by the AI model (such as a single adjudication) or for the model’s behavior across its entire dataset.
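
As a minimal sketch of the rule-based explanation and feature-importance techniques listed above (using scikit-learn with entirely synthetic data and illustrative feature names), a shallow ‘surrogate’ decision tree can be fitted to the predictions of an opaque model so that analysts can read approximate, human-readable rules for how alerts are being adjudicated:

```python
# Sketch of a surrogate (rule-based) explanation for an opaque adjudication
# model; the data, labels and feature names here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["name_match_score", "txn_amount_zscore", "country_risk"]
X = rng.random((1000, 3))
# Synthetic "true risk" labels, used only to stand up a black-box model.
y = (0.6 * X[:, 0] + 0.4 * X[:, 2] > 0.5).astype(int)

# Stand-in for the opaque production model.
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained to mimic the black box's own decisions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules approximating the black-box decision logic ...
print(export_text(surrogate, feature_names=feature_names))
# ... and a rough ranking of which features drive those decisions.
print(dict(zip(feature_names, surrogate.feature_importances_.round(2))))
```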

Assessing AI-powered solutions in compliance adjudication

Banks continue to face technical challenges when integrating explainability into their AI pipelines. While AI brings clear benefits in enhancing efficiency and accuracy, ensuring transparency and accountability remains critical in sectors such as banking. The next generation of AI-driven adjudication is moving toward more transparent, human-centric models that blend XAI techniques with hybrid approaches, combining rule-based logic and machine learning to provide clearer insights into decision-making.

This next generation focuses on dynamic visualization tools, natural language explanations and adaptive learning frameworks. Automated feedback loops refine these models by incorporating analyst reviews and evolving policy requirements, creating more flexible and responsive adjudication processes.
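
One hypothetical shape such a hybrid, feedback-driven adjudicator might take is sketched below: a transparent rule layer handles clear-cut cases, a learning-based score handles the rest, and analyst outcomes are recorded so that thresholds can be periodically re-tuned. All names, weights and the re-tuning policy are assumptions for illustration and do not describe any particular vendor's implementation.

```python
# Hypothetical sketch of a hybrid (rules + ML) adjudicator with a feedback loop.
from dataclasses import dataclass, field

@dataclass
class HybridAdjudicator:
    escalate_threshold: float = 0.7               # ML score above which alerts are escalated
    feedback: list = field(default_factory=list)  # (alert, system_decision, analyst_decision)

    def decide(self, alert: dict) -> tuple[str, str]:
        # 1. Transparent rule layer: clear-cut cases never depend on the model.
        if alert.get("exact_sanctions_match"):
            return "escalate", "rule: exact sanctions-list match"
        # 2. Learning-based layer: model score, returned with its rationale.
        score = self.model_score(alert)
        decision = "escalate" if score >= self.escalate_threshold else "close"
        return decision, f"model score {score:.2f} vs threshold {self.escalate_threshold}"

    def model_score(self, alert: dict) -> float:
        # Placeholder for an NLP/ML model; here a toy weighted sum.
        return 0.6 * alert.get("name_similarity", 0) + 0.4 * alert.get("country_risk", 0)

    def record_review(self, alert: dict, system: str, analyst: str) -> None:
        # Feedback loop: store analyst outcomes for periodic re-tuning.
        self.feedback.append((alert, system, analyst))
        missed = sum(1 for _, s, a in self.feedback if s == "close" and a == "escalate")
        # Example re-tuning policy: if analysts keep escalating alerts the
        # system closed, lower the escalation threshold to be more cautious.
        if len(self.feedback) >= 100 and missed / len(self.feedback) > 0.05:
            self.escalate_threshold = max(0.5, self.escalate_threshold - 0.05)

adjudicator = HybridAdjudicator()
decision, reason = adjudicator.decide({"name_similarity": 0.9, "country_risk": 0.6})
print(decision, "-", reason)  # escalate - model score 0.78 vs threshold 0.7
```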

For banks, sharing their experiences with regulators and other governing bodies is increasingly seen as a proactive step in addressing these challenges. By openly discussing their struggles and successes in implementing XAI, banks can contribute to the development of regulatory frameworks that strike a balance between fostering innovation and ensuring accountability. Moreover, collaboration between banks and regulators fosters a deeper understanding of the technical intricacies of AI systems, enabling regulators to make informed decisions that promote responsible AI deployment.

Ultimately, prioritizing explainability in AI systems not only enhances regulatory compliance but also strengthens customer trust. By sharing their experiences and engaging in dialogue with regulators and other stakeholders, banks can collectively start to harness the transformative power of AI. At the same time, they can uphold the highest standards of transparency and accountability, thus safeguarding customers’ best interests in an increasingly AI-driven financial landscape.

About Silent Eight

Silent Eight specializes in addressing the complex market challenges of financial crime compliance with Agentic AI solutions tailored to adjudication. It focuses on the requirements of the third era of alert adjudication by leveraging Agentic AI to deliver explainable and precise decision-making processes. These AI-driven systems are designed to provide dynamic, real-time adjudication capabilities that adapt to evolving risk parameters, enhancing both efficiency and compliance. And by integrating NLP and explainable decision pathways, Silent Eight aims to empower financial institutions to identify and address high-risk alerts without sacrificing transparency. Its solutions help to ensure that key decision points are clearly visualized, offering analysts a deep understanding of how alerts are adjudicated.

Silent Eight is already addressing firms’ challenges by delivering AI-powered adjudication solutions that integrate explainability at their core. By combining advanced machine learning with rule-based logic, Silent Eight ensures that financial institutions can trace and justify decisions, meeting both regulatory and operational demands.

Silent Eight’s solutions also include adaptive feedback loops and visualization tools that refine models based on analyst input and evolving policy requirements, setting a benchmark for effective AI deployment in compliance adjudication.

Learn more about Silent Eight’s groundbreaking AI adjudication at silenteight.com.
