JPMorgan Chase spends $17 billion annually on technology and employs over 2,000 AI and machine learning specialists. Bank of America’s virtual assistant Erica has handled over 2 billion customer interactions. Goldman Sachs, Morgan Stanley, and Citigroup have all deployed internal generative AI platforms for their workforces.
AI in US banking is not experimental. It is operational, at scale, across every major function. But banking also sits under some of the most rigorous regulatory oversight of any industry — and regulators are watching AI closely.
The Office of the Comptroller of the Currency (OCC), the Federal Reserve, the FDIC, the Consumer Financial Protection Bureau (CFPB), and state regulators are all developing or refining AI-specific guidance. Getting this wrong does not just mean a bad outcome — it means enforcement action, consent orders, and front-page headlines.
Key takeaways
- US banks are deploying AI across fraud detection, credit underwriting, compliance, and customer service
- Model risk management (SR 11-7) is the foundational regulatory framework for AI in banking
- Fair lending laws (ECOA, Fair Housing Act) create specific risks for AI-driven credit decisions
- Regulators expect AI governance, explainability, and workforce competency — not just technical performance
AI use cases in US banking
Fraud detection and prevention
Fraud detection is the most mature AI application in banking. AI systems analyze transaction patterns in real time, identifying anomalies that rule-based systems miss. Key applications:
- Real-time transaction monitoring. Machine learning models analyze billions of transactions daily, flagging suspicious activity with far fewer false positives than legacy systems.
- Identity verification. AI-powered biometric authentication, document verification, and behavioral analytics reduce account takeover and synthetic identity fraud.
- Anti-money laundering (AML). AI dramatically improves suspicious activity report (SAR) quality while reducing false positive alert volumes by 50-70% (Deloitte Banking AI Report, 2025).
Banks are also using AI to combat emerging threats: deepfake voice fraud targeting wire transfers, AI-generated phishing at scale, and sophisticated social engineering.
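The core idea behind rule-free transaction monitoring can be sketched in a few lines. This is a minimal illustration using a z-score outlier test on a single customer's transaction amounts; the data, threshold, and single-feature design are all hypothetical, and production fraud models use far richer features and learned decision boundaries.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transactions whose amount deviates more than `threshold`
    standard deviations from the customer's historical mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation in history, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# A customer's recent transaction amounts; the $9,800 transfer stands out.
history = [42.10, 15.75, 88.00, 23.40, 51.20, 9800.00, 33.60]
print(flag_anomalies(history))  # → [5], the index of the outlier
```

A real system replaces the z-score with a trained model (isolation forests, gradient-boosted trees, neural networks) and scores each transaction against many behavioral features, but the structure, score against history and flag deviations, is the same.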
$3.1B
in fraud losses prevented by AI-powered detection systems across the top 10 US banks in 2025
Source: American Bankers Association, 2025
Credit underwriting and scoring
AI is reshaping how banks make lending decisions:
- Alternative data models. AI models incorporate non-traditional data — rent payments, utility bills, cash flow patterns — to score thin-file borrowers who lack conventional credit histories.
- Faster decisioning. AI-powered underwriting reduces decision times from days to minutes for consumer and small business lending.
- Risk prediction. Machine learning models outperform traditional scorecards in predicting default, particularly for non-prime segments.
But AI credit models are a regulatory flashpoint. The CFPB, DOJ, and state attorneys general are actively investigating whether AI lending models produce disparate impact — discriminating against protected classes even without using prohibited variables directly.
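A first-pass screen for disparate impact is the "four-fifths rule": a group's approval rate below 80% of the highest group's rate warrants investigation. A minimal sketch, using hypothetical approval counts:

```python
def adverse_impact_ratio(approvals):
    """approvals: dict mapping group -> (approved_count, applicant_count).
    Returns each group's approval rate divided by the highest group's rate.
    A ratio below 0.8 (the 'four-fifths rule') is a common screening
    threshold for potential disparate impact."""
    rates = {g: a / n for g, (a, n) in approvals.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

ratios = adverse_impact_ratio({
    "group_a": (720, 1000),   # 72% approval rate
    "group_b": (500, 1000),   # 50% approval rate
})
print(ratios)  # group_b ratio ≈ 0.69 — below 0.8, flags for review
```

This ratio is a screen, not a verdict: regulators and courts also look at statistical significance, business justification, and less-discriminatory alternatives.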
Compliance and regulatory technology
AI is becoming essential for managing banking’s massive compliance burden:
- Regulatory change management. Natural language processing tools monitor and analyze the 50,000+ regulatory updates US banks face annually (Thomson Reuters Regulatory Intelligence).
- Trade surveillance. AI monitors trading activity for market manipulation, insider trading, and best execution violations.
- KYC and customer due diligence. AI automates identity verification, sanctions screening, and adverse media monitoring.
- BSA/AML. AI-powered transaction monitoring reduces false positive rates while improving detection of true suspicious activity.
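One building block of KYC sanctions screening is fuzzy name matching, since watchlist names rarely match customer records exactly. A toy sketch using string similarity, with an invented watchlist; real screening engines layer transliteration, known aliases, and date-of-birth matching on top:

```python
from difflib import SequenceMatcher

def screen_name(name, sanctions_list, threshold=0.85):
    """Return watchlist entries whose string similarity to `name`
    meets `threshold`, with the similarity score for each hit."""
    hits = []
    for entry in sanctions_list:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

watchlist = ["Ivan Petrov", "Jon Doe Trading LLC", "Acme Shell Co"]
print(screen_name("Ivan Petrow", watchlist))  # catches the near-match spelling
```

The threshold is the operational lever: lower it and false positives (and analyst workload) climb; raise it and true matches slip through, which is exactly the trade-off AI-based screening tries to improve on.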
Customer service and personalization
AI is transforming the customer experience:
- Virtual assistants. Bank of America’s Erica, Capital One’s Eno, and similar tools handle routine inquiries, balance checks, bill payments, and financial guidance.
- Personalized offers. AI analyzes customer behavior to deliver relevant product recommendations at the right time.
- Document processing. AI extracts information from loan applications, tax returns, and financial statements, reducing manual processing time by 80% or more.
70%
of US banks are using or piloting generative AI, with compliance and customer service as top use cases
Source: McKinsey Banking AI Survey, 2025
The regulatory landscape
SR 11-7: Model risk management
The foundational regulatory document for AI in banking is the Federal Reserve’s SR 11-7, issued in 2011 and still the primary framework. It establishes that:
- Models must be validated. Any quantitative model used for decision-making — including AI/ML models — must undergo independent validation.
- Model risk must be managed. Organizations need a model risk management (MRM) framework covering model development, implementation, use, and monitoring.
- Documentation is mandatory. Model design, assumptions, limitations, testing results, and ongoing monitoring must be documented.
AI and ML models create new MRM challenges because they are often more complex, less transparent, and harder to validate than traditional statistical models.
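The ongoing-monitoring leg of an MRM framework usually includes drift metrics that compare a model's inputs or scores in production against the validation baseline. A common one is the Population Stability Index (PSI); this sketch uses illustrative distributions and the conventional reading thresholds, not any regulator-mandated values:

```python
from math import log

def psi(expected, actual):
    """Population Stability Index between two binned distributions
    (lists of proportions over the same bins). A common reading:
    < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate/revalidate."""
    eps = 1e-6  # avoid log(0) for empty bins
    return sum((a - e) * log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

dev_scores  = [0.10, 0.20, 0.40, 0.20, 0.10]  # score distribution at validation
prod_scores = [0.05, 0.15, 0.35, 0.25, 0.20]  # score distribution in production
print(round(psi(dev_scores, prod_scores), 3))  # → 0.136, in the "monitor" band
```

A PSI breach does not by itself mean the model is wrong; it means the population has shifted enough that the validation evidence may no longer apply, which is precisely what SR 11-7's monitoring requirement is designed to catch.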
SR 11-7 was written before modern AI. Regulators are updating their expectations. The OCC’s 2025 guidance on AI in banking explicitly states that “the use of AI does not reduce the expectations for model risk management — it increases them.” Treat SR 11-7 as the floor, not the ceiling.
Fair lending and anti-discrimination
AI credit models face intense regulatory scrutiny under fair lending laws:
- Equal Credit Opportunity Act (ECOA). Prohibits discrimination in credit decisions on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance.
- Fair Housing Act. Prohibits discrimination in residential mortgage lending.
- CFPB enforcement. The CFPB has made clear that AI-driven disparate impact is as actionable as intentional discrimination. In 2024, the CFPB issued guidance requiring lenders to provide specific, accurate adverse action reasons when AI models deny credit — “the algorithm decided” is not sufficient.
- Adverse action notice requirements. Under ECOA, lenders must explain why a credit application was denied. For opaque AI models, generating meaningful explanations is a significant technical and legal challenge.
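For an interpretable scorecard, specific adverse action reasons can be derived mechanically: rank the features by how much they pulled the applicant's score below a reference applicant. A minimal sketch with a linear model and invented features, weights, and population means; opaque models need SHAP-style attributions to produce the same ranking:

```python
def adverse_action_reasons(weights, applicant, population_means, top_n=2):
    """Rank features by how much they pulled the applicant's score
    below the average applicant — the basis for the specific reason
    codes ECOA/Regulation B requires. Linear model for illustration."""
    contributions = {
        f: weights[f] * (applicant[f] - population_means[f])
        for f in weights
    }
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [f for c, f in negatives[:top_n]]

weights   = {"utilization": -2.0, "payment_history": 3.0, "income": 0.05}
applicant = {"utilization": 0.9,  "payment_history": 0.6, "income": 45}
means     = {"utilization": 0.3,  "payment_history": 0.9, "income": 60}
print(adverse_action_reasons(weights, applicant, means))
# → ['utilization', 'payment_history']
```

The hard part in practice is not the ranking but ensuring the attributions are faithful to the actual model, which is why "the algorithm decided" fails regulatory muster.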
NIST AI RMF alignment
The NIST AI Risk Management Framework provides the governance structure that banking regulators increasingly expect. Its four functions — Govern, Map, Measure, Manage — align with banking regulatory expectations:
- Govern aligns with MRM governance and AI policy requirements
- Map aligns with model inventory and risk identification
- Measure aligns with model validation and bias testing
- Manage aligns with ongoing monitoring and incident response
State-level regulation
Beyond federal regulators, state laws are adding complexity:
- Colorado AI Act (2026). Requires deployers of high-risk AI (including credit decisioning) to implement risk management, bias testing, and consumer disclosure.
- New York City Local Law 144. Requires bias audits for automated employment decision tools — a preview of requirements likely to expand to other domains.
- Illinois AI Video Interview Act. Regulates AI in hiring, including for banking positions.
Explainability: The core challenge
Banking regulators demand explainability — the ability to explain why an AI system reached a specific decision. This creates a fundamental tension with modern AI:
- Traditional models (logistic regression, decision trees) are inherently explainable
- Complex ML models (deep neural networks, ensemble methods, LLMs) deliver better performance but are harder to explain
- Regulators expect both — high performance and clear explainability
Approaches banks are using:
- SHAP (SHapley Additive exPlanations) values to explain individual predictions
- LIME (Local Interpretable Model-agnostic Explanations) for local interpretability
- Surrogate models that approximate complex models with simpler, explainable ones
- Feature importance analysis across the model portfolio
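The Shapley values behind SHAP can be computed exactly when the feature set is tiny: average each feature's marginal contribution to the model's output over every possible feature ordering. A sketch over a hypothetical additive credit-score function (SHAP approximates this computation efficiently for real models, where enumerating orderings is intractable):

```python
from itertools import permutations

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal
    contribution to value_fn over all feature orderings."""
    phi = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        present = set()
        for f in order:
            before = value_fn(frozenset(present))
            present.add(f)
            phi[f] += value_fn(frozenset(present)) - before
    return {f: v / len(orderings) for f, v in phi.items()}

# Hypothetical score function: baseline 600 plus additive feature effects.
effects = {"utilization": -40, "payment_history": 55, "income": 10}
score = lambda present: 600 + sum(effects[f] for f in present)

print(shapley_values(list(effects), score))
```

For this additive toy model the Shapley values recover the per-feature effects exactly; the method's value is that the same attribution logic applies to models with interactions, where no such simple decomposition exists.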
Explainability is not just a regulatory requirement — it is a risk management tool. If you cannot explain why your AI model makes a decision, you cannot identify when it is making bad ones. Start with the AI governance framework and build explainability into every stage of model development.
The workforce imperative
AI in banking creates an acute skills gap:
- Model developers need to understand fair lending law, not just machine learning
- Compliance officers need to understand AI well enough to audit it
- Loan officers need to understand AI-assisted recommendations well enough to override them appropriately
- Customer-facing staff need to explain AI-driven decisions to customers
- Board members and executives need sufficient AI literacy to exercise oversight
This is not a technology problem. It is a training problem.
Train your banking workforce with Brain
Brain delivers AI training built for the complexity of financial services. Practical modules covering AI fundamentals, model risk awareness, fair lending implications, shadow AI prevention, and regulatory compliance. Role-specific content for front office, risk, compliance, IT, and executive teams. Tracked, assessed, and audit-ready for your regulators.
Explore our plans to get started.