8 minute read | May 28, 2025
Curious about how financial institutions, their service providers, and their federal regulators are using and overseeing machine learning and other AI tools?
A new GAO report[1] published on May 19th (the “Report”) provides a unique glimpse into AI use cases and risk mitigants being employed in the financial services industry, based in part on interviews with representatives from a variety of constituents, including the following entities.
| Entity Type | # Interviewed |
| --- | --- |
| Depository Institutions | 4 |
| Federal Regulatory Agencies | 7 [2] |
| Industry Trade Groups | 6 |
| Research and Consulting Groups | 5 |
| Consumer/Investor Advocacy Groups | 3 |
| Technology Providers | 3 |
The Report also incorporates findings from government, non-government organizations, and academic studies and reports, as well as applicable law and interpretive guidance, to assess (i) the potential benefits and risks of using AI in financial services, (ii) how AI use by financial institutions is overseen by federal financial regulators, and (iii) the use of AI by the regulators themselves in conducting supervisory and market oversight activities.
The Report shares a number of specific ways in which banks and credit unions are already using AI, particularly machine learning, to enhance their internal operations and to improve and expand the services provided to their customers. The Report also discussed the more limited adoption of generative AI (“GenAI”) by certain financial institutions.
For example, GenAI use cases have included:
Other AI use cases with external impact included:
The Report further documents reported cost-saving benefits for financial institutions from adopting AI tools. An AI provider indicated that its AI model “reduced the time and resources needed for financial institutions to make credit decisions by up to 67 percent.”[5] The GAO also cited a study that concluded “chatbots saved approximately $0.70 per customer interaction compared with human agents.”[6] However, trade associations cautioned that the costs associated with developing and/or acquiring AI may put some tools out of reach for smaller institutions.
While the Report highlights multiple significant benefits of adopting AI, interviewees also flagged a number of concerns:
Ultimately, representatives of financial institutions told the GAO that they have been “more cautious about adopting AI for activities where a high degree of reliability or explainability is important or where they are unsure how regulations would apply to a particular use of AI.”[9]
Financial Regulators Are Prioritizing AI Oversight and Adoption
Federal regulators told the GAO that existing laws, regulations, and guidance are applicable to financial institutions regardless of AI use, including model risk management guidance and third-party risk guidance issued by several of the prudential banking authorities.[10] The FDIC, Federal Reserve, OCC, and NCUA also said that examination of AI usage “would typically be reviewed as part of broader examinations of safety and soundness, information technology, or compliance with applicable laws and regulations.”[11] However, the NCUA flagged its relatively limited statutory authority to examine third-party technology service providers that serve credit unions, prompting the GAO to recommend that Congress consider addressing this gap through legislative action.
The Report found that some banking regulators have already been exercising supervisory oversight with respect to financial institutions’ use of AI. For example, the Federal Reserve, OCC, and CFPB said they have conducted multiple reviews of financial institutions focused on AI use, including the OCC’s “review of seven unnamed large banks from 2019 to 2023.”[12] According to the Report, the OCC’s review concluded these banks’ AI risk management practices were largely satisfactory, though their risk ratings for models did not explicitly capture AI-specific risk factors, and only limited information was available regarding efforts to mitigate AI-related bias. While the Report did not define what those AI-specific risks might be, such risks could potentially include emergent risks associated with GenAI use, such as direct prompting attacks, backdoor poisoning attacks, jailbreaking, and training data extraction. The Report also stated that one of the CFPB’s reviews prompted the agency’s issuance of guidance regarding chatbots in 2023.
Finally, federal financial regulators themselves have been adopting AI to improve the efficiency and comprehensiveness of their oversight of regulated entities. Current use cases include:
| Use Case | Agencies |
| --- | --- |
| Content creation and editing (graphics, videos, presentations) | FDIC, Federal Reserve, NCUA, SEC |
| Scoring job application essays | FDIC |
| Data analysis and pattern identification (e.g., stress testing; insider trading detection) | FDIC, Federal Reserve, NCUA, SEC |
| Document search and information extraction (e.g., consumer complaint analysis; credit report data extraction) | FDIC, Federal Reserve, SEC |
| Error or outlier detection | Federal Reserve, NCUA, CFTC |
While representatives of each regulatory agency interviewed said they are not presently using GenAI for supervisory or market oversight activities, some are considering doing so in the future. For example, the OCC indicated it intends to use GenAI to help examiners “identify relevant information in supervisory guidance and assess risk identification and monitoring in bank documents.”[13] The Federal Reserve is also considering ways to explore GenAI in supervisory activities.
With respect to AI governance and planning, multiple regulators cited AI strategy documents they have already developed or are currently developing. In addition, some regulators, including the Federal Reserve and NCUA, have adopted AI-specific policies, and the OCC and SEC are in the process of following suit. That said, the regulators indicated they are not using AI to make autonomous decisions or as a sole source of input for supervision.
Key Takeaways from the Report
In summary, the GAO report provides insights into AI adoption in the banking and financial services space, including GenAI adoption, and into whether, in the GAO’s view, banking regulators are adequately reviewing AI use and providing sufficient guidance on AI risk management.
[1] GAO, Artificial Intelligence: Use and Oversight in Financial Services, GAO-25-107197 (May 2025) (the “Report”).
[2] The regulatory agencies interviewed were the Board of Governors of the Federal Reserve System (“Federal Reserve”), the Office of the Comptroller of the Currency (“OCC”), the Federal Deposit Insurance Corporation (“FDIC”), the National Credit Union Administration (“NCUA”), the Securities and Exchange Commission (“SEC”), Consumer Financial Protection Bureau (“CFPB”), and the Commodity Futures Trading Commission (“CFTC”).
[3] Id. at 9.
[4] Report, supra Note 1, at 10.
[5] Id. at 11.
[6] Id. at 12.
[7] Id. at 13.
[8] Id. at 14.
[9] Id. at 9.
[10] The Report noted, however, that several industry groups and financial institutions suggested that regulators should clarify AI-related guidance, including with respect to explainability expectations and adverse action notice requirements.
[11] Report, supra Note 1, at 21.
[12] Id. at 23.
[13] Id. at 35.