
Responsible AI in Financial Services

Scott Shimp
August 2022

Many banks, fintech companies, and other financial industry players invest heavily in data-driven innovation, especially artificial intelligence (AI) and machine learning (ML). In our latest white paper, we explore how AI/ML is essential to success in the face of key industry trends, even as it poses new risks. In this article, we ask: what about the larger context in which this innovation takes place? In this highly regulated industry, what do regulators have to say to encourage responsible approaches to AI/ML, and how should this factor into companies' plans for digital innovation?

Regulatory Perspectives


In 2021, a group of five U.S. financial regulators published a request for information (RFI) on "Financial Institutions' Use of Artificial Intelligence, Including Machine Learning." Along with a series of specific questions, the authors provided a list of existing laws and regulations that they deemed relevant to AI/ML in the financial services context. While the RFI does not specify any next steps, the message was clear: regulators are watching closely as the financial industry continues to expand its use of AI/ML approaches.

The regulators gathered responses from dozens of industry participants and members of the public. Some stakeholders recommended against new regulation, citing existing model risk management requirements. In contrast, other groups noted gaps they perceive in today's regulations and urged regulators to take new action. Regardless of their views on regulation, respondents generally advocated for an approach to AI that avoids potential adverse impacts. This kind of framework is commonly called "responsible AI" by companies, researchers, and data scientists.

A year on from the responses to this RFI, what course has AI/ML regulation taken? Under the current administration, agencies are heavily focused on the potential for bias. For example, in October 2021, the Consumer Financial Protection Bureau (CFPB) used the occasion of a settlement with a bank, reached in conjunction with other agencies, to announce a broad initiative against "digital redlining, disguised through so-called neutral algorithms, that may reinforce the biases that have long existed." As evidence of potential harm, the CFPB observed that "a statistical analysis of 2 million mortgage applications found that Black families were 80% more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds."
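To put the cited figure in concrete terms, the sketch below shows one way such a disparity can be computed: comparing denial rates between two groups of applications. The data, group labels, and column names are illustrative assumptions for this sketch, not the CFPB's actual analysis, which controlled for similar financial and credit backgrounds.

```python
import pandas as pd

# Toy application outcomes; the data and column names are assumptions
# for illustration, not the study's real inputs.
apps = pd.DataFrame({
    "group":  ["black"] * 10 + ["white"] * 10,
    "denied": [1] * 9 + [0] + [1] * 5 + [0] * 5,
})

# Denial rate for each group.
rates = apps.groupby("group")["denied"].mean()

# Relative disparity: how much more likely one group is to be denied.
disparity = rates["black"] / rates["white"] - 1
print(f"Denial rate disparity: {disparity:.0%}")  # 80% in this toy data
```

Regulators and practitioners use several related measures (for example, the ratio of approval rates rather than denial rates); the point is that such disparities are straightforward to quantify and monitor.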

At one recent public event, a panel of regulatory officials reiterated that existing rules for financial services can and will be applied to AI/ML applications. As one official put it: "If they don't have the appropriate governance, risk management and controls for AI, they shouldn't use AI." In other words, companies need to know what they are doing with AI/ML and put the proper framework in place to manage it.

Recent moves to increase oversight of AI/ML also extend beyond the financial services industry. Shortly after the release of the RFI last year, the Federal Trade Commission (FTC) released a statement highlighting its role in enforcing relevant standards. In February 2022, the Algorithmic Accountability Act was introduced in the U.S. House of Representatives, summarized as a bill "to direct the [FTC] to require impact assessments of automated decision systems and augmented critical decision processes." In the language of the bill, "automated decision systems" often include AI/ML and related data-driven approaches, depending on the application. The legislation would require formal reviews of such systems with an eye to potential bias or other negative impacts on consumers. Finally, in March 2022, the National Institute of Standards and Technology (NIST) released a draft AI risk management framework for public comment. While the complete regulatory landscape is still coming into focus, it seems clear that broad-based federal legislation, rule-making, and standards would significantly bolster the implicit and explicit obligations that already exist for financial institutions.

Taking Responsibility

Given these developments, what steps can a company take to position itself for success without becoming a poster child for burgeoning efforts to regulate AI/ML?

First, companies should ensure they extend their model risk management practices to AI/ML models. This starts with clear definitions and policies around what counts as a model, whether it runs in production behind a mobile app or is executed manually from a Python notebook. The key is to maintain a complete inventory of models, including non-traditional AI-powered ones, so that existing controls can be applied consistently. As we've previously explored, all three lines of defense (management, compliance, and internal audit) must have the conceptual understanding and common vocabulary to manage the risks associated with AI/ML models effectively.
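As a minimal sketch of what a consistent inventory entry might capture, consider the record below; the fields and example values are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a model inventory; fields are illustrative, not a standard."""
    name: str
    owner: str                  # accountable business unit
    use_case: str               # decision the model supports
    deployment: str             # "production service", "manual notebook", etc.
    validated: bool = False     # passed independent model validation
    risk_tier: str = "unrated"  # drives the intensity of controls applied

inventory = [
    ModelRecord("credit-line-increase", "consumer-lending",
                "credit line increase decisions", "production service",
                validated=True, risk_tier="high"),
    # A notebook-driven analysis is still a model and still belongs here.
    ModelRecord("churn-exploration", "marketing-analytics",
                "ad hoc churn analysis", "manual notebook"),
]

# Surface models that have not yet passed validation.
for record in inventory:
    if not record.validated:
        print(f"Needs validation: {record.name} ({record.deployment})")
```

In practice an inventory like this lives in a governance platform rather than in code, but the principle is the same: every model, traditional or AI-powered, gets a record to which controls can be attached.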


In addition to keeping track of their models, companies can look to third parties for support. Many vendors offer AI-powered products that financial institutions can use in lieu of developing, deploying, and maintaining their own models. Some even offer services aimed directly at responsible AI/ML, with capabilities to address issues such as explainability and bias. However, relying on vendors does not remove a company's obligation to understand how the models are used in decision-making and to manage the associated risks. As one academic observer states: "Procuring responsible AI is not a question of sourcing from responsible vendors or adding requirements for testing and evaluation onto the functional requirements. Rather, responsible AI is a problem of developing an adequate management program."
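As one illustration of the kind of in-house understanding that remains necessary, the sketch below uses the open-source shap library to attribute a model's predictions to its input features. The model and synthetic data are assumptions for this sketch; a vendor's actual tooling and interfaces will differ.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a model whose decisions we need to understand;
# the synthetic data and model choice are assumptions for illustration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribute each predicted probability to the input features.
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X)
shap_values = explainer(X[:50])

# Mean absolute attribution per feature: a rough global importance ranking.
print(np.abs(shap_values.values).mean(axis=0))
```

Whether such attributions come from a vendor dashboard or an internal notebook, being able to produce and interrogate them is part of the "adequate management program" the quotation describes.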

Whether systems are developed in-house or used as a service, we contend that the foundation for applying AI/ML to business problems responsibly is twofold: training all stakeholders in the skills to manage new technologies and approaches, and fostering communities of practice committed to responsible innovation with data. By investing in their people, financial institutions can give regulators and the public confidence that data-driven innovation will bring benefits for all.

Driving Digital Transformation in Financial Services

White Paper

Digitization is redefining our experiences in all aspects of life, from how we work to our expectations as consumers. Responsibly applying AI and ML tools to processes that shape such critical functions as lending decisions, risk assessment, compliance, and customer service requires a data-savvy workforce. A solid foundation of data science skills will help financial services organizations safely navigate this new digital terrain.
