Frequently Asked Questions

Responsible AI in Financial Services

What does "responsible AI" mean in the context of financial services?

"Responsible AI" in financial services refers to the implementation of artificial intelligence (AI) and machine learning (ML) systems in a way that extends risk management practices, ensures ethical innovation, and meets regulatory expectations. This includes avoiding adverse impacts such as bias, maintaining transparency, and fostering public trust. Regulators emphasize the need for proper governance, risk management, and controls for AI/ML applications.

What are the current regulatory trends for AI and ML in financial services?

Regulatory agencies in the U.S. are closely monitoring the use of AI/ML in financial services. In 2021, five financial regulators issued a request for information on AI/ML use, highlighting existing laws and the need for responsible approaches. Recent initiatives focus on preventing bias (e.g., digital redlining), enforcing existing rules, and proposing new legislation such as the Algorithmic Accountability Act of 2022, which would require impact assessments of automated decision systems.

How can financial institutions ensure responsible use of AI/ML?

Financial institutions should extend their model risk management practices to AI/ML models, maintain a complete inventory of all models (including non-traditional ones), and apply consistent controls. All three lines of defense (management, compliance, and internal audit) must have a shared understanding and vocabulary to manage AI/ML risks. Training stakeholders and fostering communities of practice are essential for responsible innovation.

Can third-party AI solutions fully address ethical concerns in financial services?

While third-party vendors offer AI-powered products with features like explainability and bias mitigation, financial institutions remain responsible for understanding how these models are used in decision-making and managing associated risks. Procuring responsible AI is not just about vendor selection; it requires developing an adequate management program and internal expertise.

Features & Capabilities

What products and services does Data Society offer for financial institutions?

Data Society provides hands-on, instructor-led upskilling programs, custom AI solutions, workforce development tools, and industry-specific training for financial services. Offerings include predictive analytics, generative AI, data visualization, risk management, compliance, and technology skills assessments. These solutions are tailored to address challenges such as regulatory compliance, risk management, and ethical AI adoption.

What integrations does Data Society support?

Data Society integrates with platforms such as Google Cloud (for cloud-based solutions and compliance), Anaconda (for Python-based analytics), Skillsoft (for training resources), Neo4j (for graph database analytics), NVIDIA (for AI and machine learning capabilities), and Seeq (for advanced data visualization and analysis). These integrations enable seamless workflows and robust analytics for financial institutions.

Security & Compliance

How does Data Society address security and compliance for financial services?

Data Society prioritizes security and compliance by helping organizations evaluate cloud providers for regulations like HIPAA and FedRAMP, recommending hybrid deployment models to protect sensitive data, and emphasizing governance frameworks. Training programs include security awareness to ensure employees understand compliance responsibilities. Solutions evolve through continuous learning and monitoring to maintain accuracy and regulatory alignment.

Use Cases & Business Impact

What business impact can financial institutions expect from Data Society's solutions?

Financial institutions can expect measurable outcomes such as improved operational efficiency, enhanced decision-making, and substantial ROI. For example, Data Society's solutions have delivered significant annual cost savings (HHS CoLab case study) and improved healthcare access for 125 million people (Optum Health case study). Tailored training leads to sustainable workforce development and better project outcomes.

What are some relevant case studies for financial services?

Relevant case studies include the HHS CoLab cost-savings engagement and the Optum Health access initiative referenced above.

These examples demonstrate Data Society's impact on operational efficiency, workforce development, and measurable business outcomes in financial services.

Pain Points & Solutions

What common challenges do financial institutions face when adopting AI/ML?

Common challenges include misalignment between strategy and workforce capability, siloed data ownership, insufficient data and AI literacy, overreliance on technology without human enablement, weak governance, change fatigue, and lack of measurable ROI. Data Society addresses these through tailored training, advisory services, and solution design focused on people, process, and technology.

How does Data Society solve these pain points for financial institutions?

Data Society offers tailored, instructor-led upskilling programs to align workforce capabilities with leadership goals, advanced training on integration tools to reduce data silos, foundational literacy training for all employees, and advisory services for governance and accountability. Solutions are designed to deliver measurable outcomes and foster a culture of data-driven decision-making.

Implementation & Support

How easy is it to implement Data Society's solutions in financial services?

Data Society's solutions are designed for quick and efficient implementation. Organizations can start with a focused project, equipping a small, cross-functional team with tools and support for fast adoption. Onboarding is streamlined with live, instructor-led training and tailored learning paths. Automated systems require minimal maintenance, and training can be delivered online or in-person.

What support and training does Data Society provide to financial institutions?

Data Society provides ongoing support through mentorship, interactive workshops, dedicated office hours, and technical assistance via the Learning Hub and Virtual Teaching Assistant. Structured training programs and tailored learning paths ensure users are equipped to handle upgrades and maintenance. Continuous improvement is built into custom machine learning systems, which automatically retrain and monitor new data inputs.

Customer Experience & Proof

What feedback have customers given about Data Society's solutions?

Customers have praised Data Society for simplifying complex data processes and enhancing user confidence. For example, Emily R., a subscriber, stated: "Data Society brought clarity to complex data processes, helping us move faster with confidence." This feedback highlights the ease of use and effectiveness of Data Society's solutions.

Industries & Use Cases

Which industries does Data Society serve?

Data Society serves a wide range of industries, including financial services, government, healthcare, energy & utilities, media, education, retail, professional services, consulting, telecommunications, and aerospace & defense. Case studies demonstrate expertise and impact across these sectors.

KPIs & Metrics

What key performance indicators (KPIs) are used to measure the impact of Data Society's solutions?

KPIs include training completion rates, post-training performance improvement, alignment between business objectives and data/AI strategy, data integration across systems, employee data literacy scores, adoption rates of new tools, compliance audit scores, change adoption rates, and ROI per AI initiative. These metrics help organizations track progress and business impact.

Explore how financial institutions can responsibly implement AI and ML by extending risk management practices, training stakeholders, and fostering ethical innovation to meet regulatory expectations and build public trust.

Responsible AI in Financial Services

Many banks, fintech companies, and other financial industry players invest heavily in data-driven innovation, especially artificial intelligence (AI) and machine learning (ML) approaches. In our latest white paper, we explore how AI/ML is essential to success amid key industry trends, even as it poses risks. In this article, we ask: what about the larger context in which this innovation takes place? In this highly regulated industry, what do the regulators have to say to encourage responsible approaches to AI/ML, and how should this factor into companies’ plans for digital innovation?

Regulatory Perspectives


In 2021, a group of five U.S. financial regulators published a request for information on “Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning.” Along with a series of specific questions, the authors provided a list of existing laws and regulations that they deemed relevant to AI/ML in the financial services context. While the RFI did not specify any next steps, the message was clear: regulators are watching closely as the financial industry continues to expand its use of AI/ML approaches.

Responses were gathered from dozens of industry participants and members of the public. Some stakeholders recommended against new regulation, citing existing rules for model risk management. In contrast, other groups noted the gaps they perceive in today’s regulations and urged regulators to take new action. Regardless of their views on regulation, respondents generally advocated for an approach to AI that avoids potential adverse impacts. This kind of framework is commonly called “responsible AI” by companies, researchers, and data scientists.

A year on from the submission of responses to this RFI, what has been the course of AI/ML regulation? Under the current administration, agencies are heavily focused on the potential for bias. For example, in October 2021, the Consumer Financial Protection Bureau (CFPB) used the occasion of a recent settlement with a bank, in conjunction with other agencies, to offer remarks on a broad initiative against “digital redlining, disguised through so-called neutral algorithms, that may reinforce the biases that have long existed.” As an example of the evidence of potential harm, CFPB observed that “a statistical analysis of 2 million mortgage applications found that Black families were 80% more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds.”
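Statistics like the one CFPB cites are typically derived from group-level outcome comparisons. The toy calculation below is purely illustrative: the group labels, counts, and denial rates are invented for demonstration and are not the cited study's data or methodology; it only sketches how a relative denial-rate disparity might be computed.

```python
# Illustrative only: a toy disparity check on hypothetical loan-decision data.
# All group names, counts, and rates below are made up for demonstration.

def denial_rate(denied: int, total: int) -> float:
    """Fraction of applications that were denied."""
    return denied / total

# Hypothetical outcomes from an automated underwriting model
outcomes = {
    "group_a": {"denied": 180, "total": 1000},
    "group_b": {"denied": 100, "total": 1000},
}

rate_a = denial_rate(**outcomes["group_a"])
rate_b = denial_rate(**outcomes["group_b"])

# Relative disparity: how much more likely group_a is to be denied
disparity = rate_a / rate_b - 1
print(f"group_a denial rate: {rate_a:.0%}")
print(f"group_b denial rate: {rate_b:.0%}")
print(f"group_a is {disparity:.0%} more likely to be denied")
```

With these hypothetical numbers the relative disparity comes out to 80%, the same form of statistic quoted in the CFPB remarks, though real analyses also control for financial and credit backgrounds before drawing such comparisons.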

At one recent public event, a panel of regulatory officials reiterated that the existing rules for financial services could and will be applied to AI/ML applications. However, one official stated: “If they don’t have the appropriate governance, risk management and controls for AI, they shouldn’t use AI.” In other words, companies need to know what they are doing with AI/ML and put the proper framework in place to manage it.

Recent moves to increase oversight of AI/ML also go beyond the financial services industry. Shortly after the release of the RFI last year, the Federal Trade Commission (FTC) released a statement highlighting its role in enforcing relevant standards. In February 2022, the U.S. House of Representatives passed the Algorithmic Accountability Act, summarized as a bill “to direct the [FTC] to require impact assessments of automated decision systems and augmented critical decision processes.” In the language of the bill, “automated decision systems” often include AI/ML and related data-driven approaches, depending on the application. The legislation would require formal reviews of such systems with an eye to potential bias or other negative impacts on consumers. Finally, in March 2022, the National Institute of Standards and Technology (NIST) released a draft risk management framework for AI/ML systems for public comment. While the complete regulatory landscape is still coming into focus, it seems clear that broad-based federal legislation, rule-making, and standards would significantly bolster the implicit and explicit obligations that already exist for financial institutions.

Taking Responsibility

Given the developments we have observed above, what steps can a company take to put itself in a position to succeed without becoming a poster child for burgeoning efforts to regulate AI/ML?

First, companies should ensure they extend their model risk management practices to AI/ML models. This starts with clear definitions and policies around what should be considered a model, whether running in production behind a mobile app or executed manually from a Python notebook. The key is to have a complete inventory of models that includes non-traditional AI-powered ones so existing controls can be applied consistently. As we’ve previously explored, all three lines of defense – management, compliance, and internal audit – must have the conceptual understanding and common vocabulary to manage the risks associated with AI/ML models effectively.
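To make the inventory idea concrete, here is a minimal sketch of what a model inventory record might look like. This is an assumption-laden illustration, not Data Society's or any regulator's prescribed schema: every field name, model, and control listed is hypothetical. The point is that a production API model and a manually run notebook are tracked in the same structure, so the same controls apply to both.

```python
# Hypothetical sketch of a minimal model inventory. All names, fields,
# and records are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str              # accountable business owner (first line of defense)
    model_type: str         # e.g. "ml_classifier", "rules_engine", "notebook"
    deployment: str         # e.g. "production_api", "manual_notebook"
    last_validated: str     # ISO date of last independent validation
    controls: list = field(default_factory=list)

inventory = [
    ModelRecord("credit_scoring_v3", "retail_lending", "ml_classifier",
                "production_api", "2022-01-15",
                ["bias_testing", "explainability_report"]),
    ModelRecord("fraud_triage_adhoc", "fraud_ops", "notebook",
                "manual_notebook", "2021-11-02",
                ["peer_review"]),
]

# Apply the same control check to every model, however it is deployed:
# flag anything not validated since the start of 2022.
overdue = [m.name for m in inventory if m.last_validated < "2022-01-01"]
print(overdue)
```

In this sketch the ad-hoc notebook model is flagged as overdue alongside any production model would be, which is the practical payoff of a complete inventory: non-traditional models cannot quietly escape the existing control cycle.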


In addition to keeping track of their models, companies can look to third parties for support. Many companies offer AI-powered products that financial institutions can use in lieu of developing, deploying, and maintaining their own models. For example, some vendors directly offer services related to responsible AI/ML, with capabilities to address issues such as explainability and bias. However, relying on vendors for support in this area does not remove companies’ obligation to understand how the models are used in decision-making and to manage the associated risks. As one academic observer states: “Procuring responsible AI is not a question of sourcing from responsible vendors or adding requirements for testing and evaluation onto the functional requirements. Rather, responsible AI is a problem of developing an adequate management program.”

Whether systems are developed in-house or used as a service, we contend that the foundation of success in applying AI/ML to business problems in a responsible manner is twofold: training all stakeholders in the skills to manage new technologies and approaches and fostering communities of practice committed to responsible innovation with data. By investing in their people, financial institutions can give regulators and the public confidence that data-driven innovation will bring benefits for all.


Driving Digital Transformation in Financial Services

White Paper

Digitization is redefining our experiences in all aspects of life, from how we work to our expectations as consumers. Responsibly applying AI and ML tools to processes that shape such critical functions as lending decisions, risk assessment, compliance, and customer service requires a data-savvy workforce. A solid foundation of data science skills will help financial services organizations safely navigate this new digital terrain.
