The FDA has now authorized over 1,000 AI and machine learning-enabled medical devices. Epic, Cerner (now Oracle Health), and every major EHR vendor have embedded AI features into their platforms. Health systems from Mayo Clinic to Kaiser Permanente to community hospitals are deploying AI in clinical workflows, revenue cycle processes, and patient engagement.
Yet the gap between AI deployment and AI competency in healthcare is alarming. A 2025 American Medical Association survey found that while 65% of physicians interact with AI tools in their daily practice, fewer than 20% have received any formal training on how these tools work, where they fail, or what their limitations are.
In an industry where errors cost lives and regulatory penalties are severe, that gap is unacceptable.
Key takeaways
- Over 1,000 AI-enabled medical devices have received FDA authorization, with the pace accelerating
- AI healthcare applications span clinical decision support, revenue cycle, population health, and administration
- HIPAA and FDA oversight create unique compliance requirements that generic AI governance does not address
- Staff training is the single biggest gap — 65% of physicians use AI tools but fewer than 20% are trained on them
Where AI is delivering results in US healthcare
Clinical decision support
AI-powered clinical decision support (CDS) is the highest-impact use case. Systems like Viz.ai (stroke detection), Aidoc (radiology triage), and Paige (pathology) analyze medical images and flag urgent findings — often faster than human radiologists.
The evidence is compelling:
- Viz.ai reduced stroke treatment times by an average of 26 minutes across participating hospitals (Viz.ai clinical data, 2024)
- AI-assisted mammography screening reduced false positives by 5.7% in the Tomosynthesis Mammographic Imaging Screening Trial (JAMA, 2024)
- GE Healthcare’s AIR Recon DL reduced MRI scan times by up to 50% while maintaining or improving image quality
But CDS is not without risk. Hallucinations in medical AI have different consequences than in a marketing draft. And algorithmic bias — particularly in tools trained predominantly on data from specific demographic groups — can exacerbate health disparities.
1,000+
AI and ML-enabled medical devices authorized by the FDA, with over 80% in radiology and cardiovascular imaging
Source: FDA AI/ML Device Database, 2025
Revenue cycle management
AI is transforming revenue cycle management across the $4.1 trillion US healthcare industry:
- Coding automation. Natural language processing tools (like 3M’s coding solutions and Nuance DAX Copilot) analyze clinical notes and suggest ICD-10 and CPT codes, reducing coding errors and accelerating claims submission.
- Prior authorization. AI automates prior authorization workflows, shortening turnarounds that average 14 days and contribute to an estimated $35 billion in annual administrative costs (CAQH, 2024).
- Denial management. AI identifies patterns in claim denials and predicts which claims are likely to be denied before submission, enabling proactive intervention.
- Patient payment prediction. Machine learning models estimate patient financial responsibility and optimize collection strategies.
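Before reaching for a predictive model, many denial-management programs start with deterministic pre-submission checks. The sketch below illustrates that pattern; the claim fields, payer name, and prior-authorization rules are all hypothetical simplifications, not a real 837 claim structure or any vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    # Hypothetical claim fields for illustration; real claims carry far more.
    cpt_code: str
    icd10_codes: list
    payer: str
    prior_auth_number: Optional[str] = None

# Hypothetical payer rule set: CPT codes this payer denies without prior auth.
PRIOR_AUTH_REQUIRED = {"Acme Health": {"70553", "97110"}}

def denial_risk_flags(claim: Claim) -> list:
    """Return human-readable reasons a claim is at risk of denial."""
    flags = []
    if not claim.icd10_codes:
        flags.append("missing diagnosis code")
    required = PRIOR_AUTH_REQUIRED.get(claim.payer, set())
    if claim.cpt_code in required and not claim.prior_auth_number:
        flags.append("prior authorization required but absent")
    return flags

claim = Claim(cpt_code="70553", icd10_codes=["G43.909"], payer="Acme Health")
print(denial_risk_flags(claim))  # → ['prior authorization required but absent']
```

In practice, rule checks like this catch the predictable denials cheaply, and a trained model is layered on top to surface the patterns humans have not codified.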
Population health and predictive analytics
Health systems are using AI to identify at-risk populations and intervene before acute events:
- Sepsis prediction. Epic’s sepsis prediction model and similar tools analyze real-time patient data to flag deteriorating patients hours before clinical signs appear.
- Readmission risk. AI models predict 30-day readmission risk, enabling targeted post-discharge interventions that reduce penalties under CMS’s Hospital Readmissions Reduction Program.
- Social determinants. AI platforms incorporate social determinants of health (SDOH) data to identify patients who need non-clinical support — transportation, food security, housing.
Administrative efficiency
AI is attacking the administrative burden that consumes an estimated 34% of US healthcare spending (JAMA, 2024):
- Ambient clinical documentation. Tools like Nuance DAX Copilot and Abridge listen to patient encounters and generate clinical notes in real time, reducing physician documentation burden by 50% or more.
- Patient scheduling. AI optimizes scheduling to reduce no-shows, balance provider workloads, and match patients with the right specialist.
- Chatbot triage. AI-powered triage tools handle routine patient inquiries, freeing clinical staff for higher-acuity interactions.
$360B
potential annual value of AI in US healthcare by 2028, driven by clinical efficiency and administrative automation
Source: McKinsey & Company, 2025
FDA regulation of AI in healthcare
The FDA regulates AI-enabled medical devices through its Software as a Medical Device (SaMD) framework. Key points:
Classification matters. AI tools are classified as Class I, II, or III based on risk level. Most AI diagnostic tools are Class II, reaching market through the 510(k) or De Novo pathway. The FDA has authorized over 1,000 devices through these pathways.
Predetermined change control plans. The FDA issued draft guidance in 2023 (finalized in December 2024) allowing AI device manufacturers to submit predetermined change control plans — documenting in advance how their algorithms will be updated, so that routine updates do not require new submissions. This was a landmark shift enabling continuous-learning AI.
Transparency requirements. The FDA expects labeling to include information about the AI model’s training data, known limitations, performance characteristics, and intended use population. The 2024 draft guidance on transparency in AI-enabled devices strengthened these requirements.
Clinical decision support exemption. Not all clinical AI is regulated as a medical device. The 21st Century Cures Act exempts certain CDS tools that are intended to support (not replace) clinician decision-making and that make their logic transparent to the user.
The FDA exemption for clinical decision support tools has limits. If your AI tool is intended to diagnose, treat, or prevent disease and does not allow the clinician to independently review the basis for the recommendation, it is likely a regulated medical device. When in doubt, consult regulatory counsel.
HIPAA considerations for AI
HIPAA compliance is non-negotiable for any AI system that touches protected health information (PHI). Key requirements:
Business Associate Agreements (BAAs). Any AI vendor processing PHI must sign a BAA. This includes cloud AI platforms (AWS, Azure, Google Cloud), AI SaaS tools, and any third-party processing PHI on your behalf. Free-tier AI tools like ChatGPT do not offer BAAs — and employees using them with PHI are creating HIPAA violations.
Minimum necessary standard. Only the minimum PHI needed for the AI function should be provided. De-identification under HIPAA Safe Harbor or Expert Determination methods should be used wherever possible.
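As a concrete illustration of the minimum necessary principle, free-text notes can be scrubbed of direct identifiers before being sent to an AI service. The sketch below is deliberately minimal: HIPAA Safe Harbor enumerates 18 identifier categories, and this pattern set covers only a few of them — it is an illustration of the technique, not a compliant de-identifier.

```python
import re

# Illustrative only: Safe Harbor lists 18 identifier categories; these regexes
# cover just a handful and would NOT satisfy HIPAA on their own.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed category tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Pt seen 03/14/2025, callback 555-867-5309, SSN 123-45-6789."
print(redact(note))  # → Pt seen [DATE], callback [PHONE], SSN [SSN].
```

Production de-identification should rely on the full Safe Harbor identifier list or a formal Expert Determination, typically via a validated tool rather than hand-rolled patterns.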
Right of access. Patients have the right to access their health information, including information generated by AI systems. Your organization must be able to provide this.
Breach notification. If an AI system is compromised or PHI is improperly disclosed through AI (including through shadow AI), HIPAA breach notification requirements apply — 60-day notification to affected individuals and HHS.
Security rule. AI systems processing PHI must meet HIPAA Security Rule requirements for administrative, physical, and technical safeguards.
The HHS Office for Civil Rights (OCR) issued guidance in 2024 clarifying that HIPAA’s non-discrimination requirements apply to AI systems used in healthcare operations. AI tools that use race, ethnicity, or other protected characteristics in ways that result in discriminatory treatment may violate HIPAA and Section 1557 of the ACA.
The workforce training gap
Technology is not the bottleneck. People are. The AI skills gap in healthcare is acute:
- Physicians need training on interpreting AI outputs, understanding model limitations, communicating AI-assisted decisions to patients, and maintaining appropriate skepticism.
- Nurses and clinical staff need training on AI-powered monitoring systems, ambient documentation tools, and when to override AI recommendations.
- Revenue cycle teams need training on AI coding tools, understanding when to accept AI suggestions and when to override them, and maintaining compliance with billing regulations.
- IT and security teams need training on AI-specific cybersecurity risks, including adversarial attacks, data poisoning, and model manipulation.
- Compliance officers need training on AI regulatory requirements, including FDA, HIPAA, and NIST AI RMF frameworks.
Building an AI governance program for healthcare
Healthcare organizations should build on the AI governance framework principles with sector-specific adaptations:
- AI inventory. Catalogue every AI system — clinical, administrative, vendor-embedded, and employee-initiated.
- Risk classification. Classify each system based on patient safety impact, regulatory status, and data sensitivity.
- AI policy. Develop healthcare-specific AI acceptable use policies covering clinical use, PHI handling, and FDA compliance.
- Training program. Implement role-based AI training for all staff — from frontline clinicians to back-office teams.
- Monitoring. Continuous monitoring of AI system performance, with clinical safety triggers.
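The first two steps — inventory and risk classification — can start as something as simple as a structured catalogue with deterministic tiering rules. The sketch below assumes hypothetical system names, vendors, and a three-tier scheme; real programs would add data sensitivity, FDA status detail, and review cadences.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers, ordered by patient-safety impact.
    LOW = "administrative, no PHI"
    MODERATE = "handles PHI, no clinical decisions"
    HIGH = "influences clinical decisions or is FDA-regulated"

@dataclass
class AISystem:
    name: str
    vendor: str
    touches_phi: bool
    clinical_use: bool
    fda_regulated: bool

def classify(system: AISystem) -> RiskTier:
    """Assign a governance tier from the highest-risk attribute present."""
    if system.clinical_use or system.fda_regulated:
        return RiskTier.HIGH
    if system.touches_phi:
        return RiskTier.MODERATE
    return RiskTier.LOW

# Example inventory entries (names and vendors are invented).
inventory = [
    AISystem("ambient scribe", "VendorX", True, False, False),
    AISystem("stroke triage", "VendorY", True, True, True),
    AISystem("shift scheduler", "VendorZ", False, False, False),
]
for s in inventory:
    print(f"{s.name}: {classify(s).name}")
```

Encoding the tiering rules in code (rather than a spreadsheet) makes the classification auditable and lets it run automatically as new systems enter the inventory.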
Train your healthcare workforce with Brain
Brain delivers AI training designed for the complexity of healthcare. Practical modules covering AI fundamentals, HIPAA-compliant AI use, clinical AI limitations, responsible deployment, and generative AI in healthcare workflows. Role-specific content for clinicians, administrators, IT, and compliance teams. Tracked, assessed, and audit-ready.
Explore our plans to get started.
Related articles
AI Healthcare Administration: Cut Admin Waste by 34%
Free clinicians from paperwork with AI for scheduling, billing, clinical documentation, and compliance. A guide for healthcare leaders.
AI in US Banking: Fraud, Credit & Regulatory Guide (2026)
Navigate AI in US banking with OCC, FDIC, and Fed guidance. Covers fraud detection, credit scoring, fair lending, and model risk management.
AI for Construction: 5 High-Impact Uses in 2026
Cut costs and improve safety with AI in construction. Covers project planning, safety monitoring, quality control, cost estimation, and BIM integration.