Regulators Make Call for Input on Growing Use of AI in Banking


A cohort of federal regulatory agencies in the U.S. is seeking input regarding the future of artificial intelligence in financial institutions, according to the entities. The groups are asking the public to weigh in on a number of considerations, including customer service, fraud prevention, credit underwriting and other financial operations.

The five agencies soliciting the comments are:

  • The Federal Reserve Board
  • The Federal Deposit Insurance Corporation (FDIC)
  • The Consumer Financial Protection Bureau (CFPB)
  • The Office of the Comptroller of the Currency (OCC)
  • The National Credit Union Administration (NCUA)

The request for information (RFI) seeks comment from consumer groups, trade associations and financial institutions themselves. Specifically, the RFI aims to better understand AI's use with respect to machine learning, governance, risk management and challenges in "developing, adopting, and managing AI." According to the solicitation, the agencies listed above are in favor of "responsible innovation" from financial institutions.

Enumerated in the RFI are a number of potential benefits and risks associated with AI’s implementation in financial institutions. “AI can identify relationships among variables that are not intuitive or not revealed by more traditional techniques. AI can better process certain forms of information, such as text, that may be impractical or difficult to process using traditional techniques,” according to the solicitation. “Other potential AI benefits include more accurate, lower-cost, and faster underwriting, as well as expanded credit access for consumers and small businesses that may not have obtained credit under traditional credit underwriting approaches.”

Policy makers and the private sector alike have closely monitored the growing use of AI. The economic, practical and ethical impact of its widescale implementation is likely to be substantial.

Recently, the Deloitte AI Institute announced a partnership with Chatterbox Labs toward the development of a “Model Insights technology for Trustworthy AI” to address some potential issues that might arise from its implementation, according to information from the professional services network.

The research aims to help organizations address critical ethical considerations by monitoring, validating and updating their AI models. In its recent enterprise study, Deloitte found that 95% of the technology's adopters expressed concerns about the ethical risks associated with implementing it. Organizations spanning financial services, government, healthcare and life sciences have all accelerated their adoption of AI, the study notes.

To that end, the institute has been actively developing a framework to guide organizations seeking to implement the technology responsibly. The partnership with Chatterbox Labs centers on operationalizing that framework, according to the announcement. "By continuously monitoring enterprise AI models, Deloitte's Model Insights delivers immediate insights that can uncover biases and vulnerabilities and allow organizations to validate that their AI models are ethical, trustworthy and fair," it continues.

With respect to the financial services industry, the regulatory RFI points out that many of the risks associated with the technology are similar to those found when implementing other models and tools. "Many of the potential risks associated with using AI are not unique to AI. For instance, the use of AI could result in operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with the use of third parties, and model risk, all of which could affect a financial institution's safety and soundness," notes the call for input.

Using AI might also "create or heighten consumer protection risks, such as risks of unlawful discrimination, unfair, deceptive, or abusive acts or practices" under relevant consumer protection law. The comment period will run for 60 days following the solicitation's publication in the Federal Register.
