In February 2024, the HHS Office for Civil Rights (OCR) settled with Montefiore Medical Center for $4.75 million after a data breach involving unauthorized access to patient records. In December 2023, OCR reached a $480,000 settlement with Lafourche Medical Group over a phishing attack that exposed patients' electronic PHI. These are not hypothetical scenarios. They are the cost of getting data governance wrong.
Now add AI to the equation. Healthcare organizations are deploying AI tools that ingest, process, and generate protected health information at scale — from ambient clinical documentation to predictive analytics to AI-powered patient portals. Each one is a potential HIPAA liability if deployed without the right safeguards.
The question is not whether to use AI in healthcare. It is how to use it without violating federal law.
Key takeaways
- Every AI tool processing PHI requires a signed Business Associate Agreement — no exceptions, including cloud AI platforms
- OCR imposed over $4.1 million in HIPAA penalties in 2024 alone, with AI-related complaints rising sharply
- Shadow AI is the biggest HIPAA risk in healthcare — staff using consumer AI tools like ChatGPT with patient data
- The HIPAA Security Rule applies fully to AI systems, requiring administrative, physical, and technical safeguards
- Staff training is the most effective and least expensive risk mitigation measure
Why AI creates new HIPAA risks
HIPAA was enacted in 1996, decades before generative AI existed. The law's core principles still apply, but AI introduces risks its drafters never anticipated.
Data flows are more complex. Traditional healthcare IT has clear boundaries: the EHR stays on-premises or in a contracted cloud. AI breaks those boundaries. A physician using an ambient documentation tool sends voice data to a cloud AI model for transcription and summarization. A revenue cycle team feeds claim narratives into an AI coding assistant. Each data flow is a potential PHI exposure point.
Third-party processing is expanding. Every AI vendor that processes PHI is a business associate under HIPAA. The number of business associates in a typical health system has grown 340% since 2020 (Ponemon Institute, 2025). More vendors means more attack surface and more BAA management complexity.
Employee-initiated AI use is invisible. This is the shadow AI problem. A 2025 survey by the American Health Information Management Association found that 42% of healthcare workers had used consumer AI tools — ChatGPT, Gemini, Claude — with some form of patient information. Most did not realize this violated HIPAA.
42%
of healthcare workers have used consumer AI tools with patient information, most without realizing the HIPAA implications
Source: AHIMA Workforce Survey, 2025
HIPAA requirements for AI systems
The HIPAA Privacy Rule, Security Rule, and Breach Notification Rule all apply to AI. Here is how each maps to AI deployment.
Business Associate Agreements
Any AI vendor that creates, receives, maintains, or transmits PHI on behalf of a covered entity must sign a BAA. This includes:
- Cloud AI platforms (AWS Bedrock, Azure OpenAI, Google Cloud Vertex AI) — all offer HIPAA-eligible services with BAAs, but you must configure them correctly and ensure the BAA covers AI-specific processing
- AI SaaS tools (ambient documentation, coding assistants, patient engagement platforms) — verify the vendor will sign a BAA and confirm how PHI is handled during model training
- Consumer AI tools (ChatGPT free tier, Gemini free tier, Perplexity) — these do not offer BAAs, period. Any PHI entered into these tools is a HIPAA violation
OpenAI’s ChatGPT Team and Enterprise plans offer BAAs. The free and Plus plans do not. The same applies to most AI vendors — the consumer version and the enterprise version have different compliance postures. Verify the specific plan your organization uses, not just the vendor name.
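Plan-level verification can be operationalized in code. Below is a minimal sketch, assuming a hypothetical internal registry of approved tools; the registry entries, field names, and `may_process_phi` helper are illustrative, not any vendor's API. The point is that approval is keyed to the vendor-plan pair, never to the vendor alone.

```python
# Minimal sketch: gate AI tool use on plan-level BAA status.
# Registry contents and function names are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedAITool:
    vendor: str
    plan: str           # BAA status differs by plan, not just vendor
    baa_signed: bool
    baa_effective: str  # e.g. "2025-01-15"

# Example registry: the same vendor can appear with different compliance postures.
REGISTRY = {
    ("openai", "enterprise"): ApprovedAITool("openai", "enterprise", True, "2025-01-15"),
    ("openai", "free"): ApprovedAITool("openai", "free", False, ""),
}

def may_process_phi(vendor: str, plan: str) -> bool:
    """Allow PHI only through tools whose specific plan has a signed BAA."""
    tool = REGISTRY.get((vendor.lower(), plan.lower()))
    return tool is not None and tool.baa_signed

assert may_process_phi("OpenAI", "Enterprise")      # BAA on file: allowed
assert not may_process_phi("OpenAI", "Free")        # same vendor, no BAA: blocked
assert not may_process_phi("SomeNewVendor", "Pro")  # unknown tool: blocked by default
```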
The minimum necessary standard
HIPAA requires that only the minimum necessary PHI be used for any given purpose. For AI, this means:
- De-identify data before feeding it to AI models whenever possible. HIPAA provides two methods: Safe Harbor (removing 18 identifiers) and Expert Determination (statistical verification that re-identification risk is very small).
- Limit prompt content. Train staff to strip patient identifiers before using AI tools, even HIPAA-compliant ones. Ask: does the AI need the patient's name, date of birth, or MRN to perform this task? (A minimal redaction sketch follows this list.)
- Audit AI inputs. Log what data is sent to AI systems and review regularly. This is both a Security Rule requirement and a practical safeguard.
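To make identifier-stripping concrete, here is a minimal redaction sketch covering a few of the 18 Safe Harbor identifier categories with simple regular expressions. The patterns are illustrative and far from sufficient for actual Safe Harbor de-identification, which requires removing all 18 categories; treat this as a pre-prompt guardrail, not a de-identification method.

```python
import re

# Illustrative patterns for a few Safe Harbor identifier categories.
# Real de-identification must cover all 18 categories (names, geography,
# dates, contact details, identifiers, biometrics, photos, etc.).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize visit on 03/14/2025 for MRN 4471932, callback 555-867-5309."
print(redact(prompt))
# Summarize visit on [DATE REDACTED] for [MRN REDACTED], callback [PHONE REDACTED].
```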
Security Rule requirements
AI systems processing PHI must comply with the HIPAA Security Rule’s three safeguard categories:
- Administrative safeguards. Designate responsibility for AI security. Conduct risk assessments specific to AI systems. Develop AI policies covering acceptable use, incident response, and vendor management.
- Physical safeguards. Control access to systems hosting AI models and PHI data stores. This extends to cloud infrastructure — ensure your cloud provider’s physical security meets HIPAA requirements.
- Technical safeguards. Encrypt PHI in transit and at rest. Implement access controls. Maintain audit logs. For AI specifically: monitor model inputs and outputs, implement data loss prevention on AI interfaces, and ensure AI-generated content is attributable and auditable.
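As a concrete instance of the technical safeguards above, here is a minimal audit-logging sketch for AI interactions. It records who sent what to which model, hashing prompt and output rather than storing raw PHI in the log; `call_model` is a stand-in for whatever BAA-covered endpoint your organization actually uses.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def call_model(prompt: str) -> str:
    """Placeholder for a call to your BAA-covered AI endpoint."""
    return "model output"

def audited_ai_call(user_id: str, model_id: str, prompt: str) -> str:
    """Log an auditable record of every AI interaction without storing raw PHI."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,    # attribution: who sent the request
        "model": model_id,  # which system processed the data
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),  # size, without content
    }
    output = call_model(prompt)
    record["output_sha256"] = hashlib.sha256(output.encode()).hexdigest()
    audit_log.info(json.dumps(record))
    return output

audited_ai_call("dr.smith", "approved-scribe-v2", "Summarize today's encounter note.")
```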
$4.75M
settlement paid by Montefiore Medical Center to OCR in 2024 for a data breach involving unauthorized access to patient records
Source: HHS Office for Civil Rights, 2024
How to build a HIPAA-compliant AI program
Healthcare leaders do not need to choose between AI adoption and HIPAA compliance. They need a structured approach that enables both.
Step 1: Inventory every AI tool
You cannot secure what you cannot see. Conduct a full AI inventory across your organization — clinical, administrative, IT, and research. Include vendor-embedded AI (your EHR likely has AI features you did not explicitly procure), employee-initiated tools, and pilot projects.
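A structured record per tool keeps the inventory usable for the later steps. The sketch below is one possible shape; the fields are suggestions, not a standard, and they anticipate the risk classification in step 2.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in the organization-wide AI inventory (fields are illustrative)."""
    name: str
    owner: str                # accountable department or person
    vendor: str = "internal"
    source: str = "procured"  # procured | vendor-embedded | employee-initiated | pilot
    processes_phi: bool = False
    phi_types: list[str] = field(default_factory=list)
    baa_in_place: bool = False
    writes_to_medical_record: bool = False

inventory = [
    AIToolRecord("Ambient scribe", "CMIO office", vendor="ScribeCo",
                 processes_phi=True, phi_types=["voice", "clinical notes"],
                 baa_in_place=True, writes_to_medical_record=True),
    AIToolRecord("EHR inbox triage", "IT", vendor="EHR vendor",
                 source="vendor-embedded", processes_phi=True,
                 phi_types=["patient messages"], baa_in_place=True),
]
```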
Step 2: Classify risk
Not all AI tools carry equal HIPAA risk. A scheduling optimization tool that uses no PHI is different from an ambient documentation tool that records patient encounters. Classify each tool by the following criteria (a scoring sketch follows the list):
- Whether it processes PHI (and what types)
- Whether it involves a third-party vendor (and whether a BAA is in place)
- Whether it generates clinical content that becomes part of the medical record
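A small tiering function over those three criteria makes the classification repeatable and auditable. The sketch below builds on the hypothetical `AIToolRecord` and `inventory` from step 1; the tier labels and thresholds are illustrative and should come from your own risk policy.

```python
def risk_tier(tool: AIToolRecord) -> str:
    """Map the three classification criteria to an illustrative risk tier."""
    if tool.processes_phi and tool.vendor != "internal" and not tool.baa_in_place:
        return "critical"  # PHI at a third party with no BAA: stop use immediately
    if not tool.processes_phi:
        return "low"       # no PHI: standard IT review is enough
    if tool.writes_to_medical_record:
        return "high"      # AI output becomes part of the legal medical record
    return "medium"        # PHI processed under a BAA, no record authorship

for tool in inventory:
    print(f"{tool.name}: {risk_tier(tool)}")
# Ambient scribe: high
# EHR inbox triage: medium
```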
Step 3: Establish governance
Build on AI governance framework principles with healthcare-specific controls:
- AI procurement must include HIPAA compliance review before any contract is signed
- Clinical AI must be reviewed by a multidisciplinary committee (clinical, IT, legal, compliance)
- All AI tools with PHI access must have documented BAAs, risk assessments, and incident response plans
Step 4: Train every role
This is where most organizations fail. Technology controls are necessary but insufficient. Your staff — physicians, nurses, coders, administrators — make dozens of decisions daily about what data to put into AI tools.
Role-based training is the most effective HIPAA risk mitigation for AI:
- Clinicians need to understand which AI tools are approved, how to use them without exposing unnecessary PHI, and how to document AI-assisted decisions
- Revenue cycle staff need training on AI coding tools and the compliance boundaries around automated billing
- IT teams need AI-specific security training covering model vulnerabilities, data pipeline security, and incident detection
- All staff need baseline training on the risks of shadow AI and the organization’s AI acceptable use policy
OCR has indicated that workforce training is a key factor in enforcement decisions. Organizations that can demonstrate comprehensive, documented AI training programs are better positioned in breach investigations and audits.
What OCR is watching
The HHS Office for Civil Rights has signaled increasing attention to AI-related HIPAA risks:
- AI and non-discrimination. OCR's 2024 final rule under Section 1557 of the ACA clarified that covered entities using AI and other decision support tools in patient care must not discriminate based on race, disability, age, or other protected characteristics, and must take reasonable steps to identify and mitigate that risk.
- Cloud and AI vendor management. OCR has pursued enforcement actions against covered entities that failed to properly vet cloud and technology vendors, including missing or inadequate BAAs.
- Right of access and AI. Patients have the right to access their health information, including AI-generated content in their medical records. Organizations must be able to explain and provide this information.
Healthcare leaders should also monitor the NIST AI Risk Management Framework and emerging AI governance standards like ISO 42001, both of which are influencing how regulators evaluate AI compliance programs.
Train your healthcare team with Brain
Brain provides AI training built for regulated industries. Our healthcare modules cover HIPAA-compliant AI use, generative AI in clinical workflows, shadow AI risks, and AI governance fundamentals — all mapped to the roles in your organization. From frontline clinicians to compliance officers, every employee gets practical, assessed training that you can document for audits.
Explore our plans to get started.