
The HIPAA Risk Analysis Checklist.
Twenty-five questions to know if yours would actually hold up.

For practices, clinics, and groups doing the §164.308(a)(1)(ii)(A) Security Rule risk analysis OCR cites in roughly 80% of enforcement actions. The questions below are the difference between "we have a checklist someone Googled" and "we have a defensible analysis." If you can answer most of them with evidence, you're closer than most. If you can't, you're where most healthcare organizations are right now.

Scope: HIPAA Security Rule
Sections: 5
Items: 25
Read time: ~18 min
Before you start

Three things to know.

  1. This is the Security Rule risk analysis. Specifically, the §164.308(a)(1)(ii)(A) requirement that OCR cites in nearly every enforcement action. The Privacy Rule has its own obligations; this checklist isn't trying to cover those. If you're new to HIPAA: this is the most consequential document you'll write.
  2. A real risk analysis is not a checklist. A risk analysis identifies threats, evaluates likelihood and impact, and drives a Risk Management Plan. A checklist asks yes/no questions about controls. This page is the readiness check; the actual analysis is the work it points toward.
  3. "What good looks like" sits under each item. Italicized, one line. If you can answer the prompt with documented evidence, you can check the box. If you have to qualify the answer, that's a remediation item, not a passing grade.
A
5 items

Scope & inventory.

Where ePHI lives, who touches it, and which AI tools showed up after your last analysis was written.

A1

Confirmed your covered entity vs. business associate status?

Some organizations are both depending on the relationship — a clinic is a Covered Entity to its patients and a Business Associate to a hospital it serves. The status determines what the analysis must cover and who's accountable for which obligations. If you can't articulate yours in a sentence per relationship, that's the first gap.

Good: a one-page status memo per significant relationship, signed by leadership, attached to the analysis.

A2

Inventoried every system that creates, receives, maintains, or transmits ePHI?

Not your IT asset inventory — the ePHI flow inventory. EHR, billing system, patient communication tools, transcription, document management, email, mobile devices, cloud storage. Anything that touches ePHI in any state. Most analyses miss something here, and the missing thing is usually the most exposed thing.

Good: a single ePHI inventory with system, owner, custodian, and ePHI elements per row, dated within the last six months.
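The inventory row described above can be sketched as a small data structure with a currency check. This is a minimal illustration, not a prescribed schema — the field names (`system`, `owner`, `custodian`, `ephi_elements`, `last_reviewed`) are assumptions for the sketch:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EPHISystem:
    """One row of the ePHI inventory (field names are illustrative)."""
    system: str               # e.g. the EHR or billing platform
    owner: str                # accountable person, not a department
    custodian: str            # who administers it day to day
    ephi_elements: list[str]  # which ePHI categories it touches
    last_reviewed: date

def is_current(row: EPHISystem, today: date, max_age_days: int = 183) -> bool:
    """The checklist asks for an inventory dated within the last six months."""
    return (today - row.last_reviewed).days <= max_age_days

row = EPHISystem("EHR", "Practice Manager", "IT Lead",
                 ["demographics", "clinical notes"], date(2025, 1, 15))
print(is_current(row, date(2025, 6, 1)))
```

The point of the structure is that every row carries a named owner and a review date — the two fields that turn a list of systems into an inventory someone can defend.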

A3

Mapped data flows where ePHI enters, lives, moves, and leaves?

A list of systems is necessary; a flow is what shows the exposure. Patient intake → EHR → billing → clearinghouse → payer → audit log → archive. Each transition is a chance for ePHI to land somewhere it shouldn't. A diagram dated within the last year is the artifact most assessors look for first.

Good: a current data flow diagram per practice area, reviewed annually, signed by someone who can defend it.

A4

Accounted for AI tools — scribes, copilots, charting assistants?

If your team is using an AI scribe, an AI-powered intake tool, a charting copilot, or any AI vendor processing patient information, that vendor is in scope and the analysis needs to address it. Most analyses we see were written before these tools showed up. The answer isn't to ban them — it's to bring them into the analysis honestly.

Good: AI tools listed alongside other ePHI systems in the inventory, with BAA status, data handling specifics, and risk rating per tool.

A5

Cataloged Business Associates with the data they actually touch?

A folder of signed BAAs is not a BA inventory. The inventory is what each BA actually receives, processes, transmits, or stores — and how that maps to your ePHI flow. Most enforcement around BAs comes from a vendor doing more (or less) than the BAA describes. The analysis should call this out, vendor by vendor.

Good: a BA inventory with vendor, signed BAA on file, ePHI categories accessed, and last review date — current within twelve months.

B
5 items

Threats & vulnerabilities.

What could go wrong, where you're exposed, and whether you've looked at both honestly.

B1

Identified threats specific to your environment?

Not a copy-pasted NIST threat catalog. Your threats: a busy front desk, mobile devices in patient rooms, a billing service that emails statements, a referring physician's office that faxes records, vendors with VPN access. Specific threats produce specific controls. Generic threats produce generic controls.

Good: a threat list written for your practice, with examples drawn from your operations rather than from a template.

B2

Identified vulnerabilities with current evidence?

Recent vulnerability scans. Configuration audits. Phishing simulation results. Third-party penetration tests where they make sense. Self-assessed vulnerabilities without evidence are guesses. The analysis should name vulnerabilities with the artifact that surfaced each one — and the date it was surfaced.

Good: a vulnerability register with each item traced to a recent assessment artifact, dated, and assigned for remediation.

B3

Considered insider threats — both accidents and intentional?

Most healthcare breaches are insider events, and most insider events are mistakes — not malice. A staff member emails a chart to the wrong recipient. An employee snoops a celebrity record. A contractor copies records to a USB drive. Intentional misuse exists too, but accidents are the larger category. The analysis should reflect both.

Good: insider threat scenarios in the analysis with controls (training, monitoring, sanctions, audit logging) tied to each.

B4

Assessed environmental and operational threats?

Power outage. Fire. Flood. Ransomware. Vendor failure. Anything that disrupts ePHI access or integrity counts. Availability is one of HIPAA's three Security Rule pillars (alongside confidentiality and integrity), and it's the one most analyses underweight. A clinic that can't access charts during an outage is a clinic with a Security Rule problem.

Good: continuity-impacting threats explicitly addressed, with backups, alternate sites, and recovery time objectives documented.

B5

Reviewed recent healthcare breaches for relevant patterns?

The HHS Breach Portal is publicly readable. So are sector reports from HHS, OCR enforcement summaries, and breach digests from healthcare cybersecurity groups. The patterns repeat — phishing leading to ransomware, BA failures, misdirected emails, AI vendors with unclear data handling. Reading what happened to peers is part of the work.

Good: an annual review note in the analysis citing recent breach trends and how your controls address each pattern relevant to your size and segment.

C
5 items

Likelihood & impact.

The math that turns "things could go wrong" into "here's what we should fix first."

C1

Evaluated likelihood for each threat-vulnerability pair?

NIST SP 800-30 expects pairs, not standalone items. "Phishing email" alone isn't a risk; "phishing email + staff with no MFA on email" is. The likelihood rating belongs to the pair, not the threat. Most analyses we see skip this step and rate threats in isolation, which produces ratings that don't survive scrutiny.

Good: a register of threat-vulnerability pairs with likelihood ratings and the reasoning behind each rating.
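The pairing logic can be sketched in a few lines; the 1–5 scale and field names here are illustrative assumptions, not anything the Rule prescribes:

```python
# Hypothetical register rows: the likelihood rating belongs to the
# threat-vulnerability pair, not to the threat in isolation.
pairs = [
    {"threat": "phishing email",
     "vulnerability": "no MFA on staff email",
     "likelihood": 4,  # 1-5 scale, per your documented methodology
     "rationale": "12% click rate on last phishing simulation"},
    {"threat": "phishing email",
     "vulnerability": "MFA enforced on email and EHR",
     "likelihood": 2,
     "rationale": "credential theft alone no longer yields access"},
]

# Same threat, two different likelihoods -- the rating lives with the pair.
for p in pairs:
    print(f'{p["threat"]} + {p["vulnerability"]}: likelihood {p["likelihood"]}')
```

Note the same threat appears twice with different ratings: rating the threat alone would collapse that distinction, which is exactly the failure mode described above.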

C2

Evaluated impact across multiple dimensions?

Patient safety. Financial. Reputational. Regulatory. Operational continuity. A breach that exposes 5,000 records has a different impact profile than a ransomware event that locks the EHR for three days during a busy week. Impact ratings that reduce all consequences to a single number lose information the analysis is supposed to capture.

Good: impact rated across distinct categories, with the highest category driving the overall rating but the others visible in the documentation.
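The "highest category drives the overall rating, others stay visible" rule is simple to express. A minimal sketch, assuming illustrative category names and a 1–5 scale:

```python
# Rate impact per category; let the maximum drive the overall rating
# while keeping every category visible in the documentation.
impact = {
    "patient_safety": 2,
    "financial": 3,
    "reputational": 4,
    "regulatory": 4,
    "operational": 5,  # e.g. EHR locked for three days during a busy week
}

overall = max(impact.values())
driver = max(impact, key=impact.get)
print(f"overall impact {overall}, driven by {driver}")
# -> overall impact 5, driven by operational
```

Reducing the dictionary to the single number `5` before filing it is the information loss the item warns about — the per-category ratings are what an assessor reads.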

C3

Used a defensible methodology?

NIST SP 800-30 is the most-cited methodology in OCR enforcement. ISO 27005 is acceptable. A homegrown spreadsheet with no underlying methodology is a problem — it works until someone challenges it. The methodology doesn't have to be elaborate; it has to be documented and consistently applied.

Good: methodology section in the analysis naming the framework (NIST 800-30, ISO 27005, etc.), with rating scales, definitions, and examples.

C4

Documented rationale, not just ratings?

A risk rated 4-out-of-5 is a number. A risk rated 4-out-of-5 because the EHR is internet-accessible, two staff accounts lack MFA, and the recent phishing simulation showed a 12% click rate is a defensible position. The reasoning is what survives the assessor's follow-up question.

Good: a paragraph of rationale per significant risk, citing specific evidence — not a category code.

C5

Validated the analysis with someone who didn't write it?

Peer review or external validation. The person who wrote the analysis is the worst person to find its blind spots. A second pair of eyes — internal IT, the practice manager, an outside reviewer — turns up the assumptions the author made unconsciously. Most analyses skip this step and miss findings as a result.

Good: a documented review with reviewer, date, and changes made (or not made, with reasoning) attached to the analysis.

D
5 items

Risk treatment.

The Risk Management Plan that turns analysis into action — and where most analyses go to die.

D1

Risk Management Plan that maps to specific risks?

The Security Rule expects the analysis to drive a Risk Management Plan. The plan is the document that connects each significant risk to the safeguards that address it. Without the plan, the analysis is a report; with it, the analysis is the foundation of the program. OCR enforcement frequently cites missing or generic Risk Management Plans.

Good: a Risk Management Plan that lists each significant risk, the chosen response, and the responsible owner — referenced from the analysis itself.

D2

Safeguards clearly tied to the risks they address?

"We have MFA" is a control. "We have MFA on email and EHR access to reduce the likelihood of credential-based phishing leading to PHI exposure" is a control tied to a risk. The traceability is what makes the analysis defensible. An assessor reading the plan should be able to ask "why this control?" and find the answer in the analysis.

Good: each major safeguard cross-referenced to the specific risks it mitigates, with the mapping visible in the plan.

D3

For "addressable" specifications, documented decisions?

The Security Rule has "required" and "addressable" implementation specifications. "Addressable" doesn't mean "optional" — it means you must implement it, document an equivalent measure, or document why the safeguard isn't reasonable for your environment. Most enforcement around addressable items comes from organizations that treated them as optional. Encryption is the most common example.

Good: a decision log for each addressable specification — what you implemented, what you replaced it with, or why you didn't, with reasoning that holds up.

D4

Remediation owners and timelines assigned?

Every remediation item should have a name attached and a date by which it'll be addressed. "IT will fix this" is not an owner. Without owners and timelines, items sit on the plan for years — which becomes its own finding. The owner doesn't have to be the person doing the work; they have to be accountable for it being done.

Good: a remediation tracker with owner, target date, status, and visible movement over time — current within the last 90 days.
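A tracker check for the failure modes named above — missing real owner, missing target date, no movement in 90 days — can be sketched directly. The row fields and the "IT will fix this" heuristic are illustrative assumptions:

```python
from datetime import date

# Hypothetical tracker rows: every item needs a named owner and a target date.
tracker = [
    {"item": "Enable MFA on email", "owner": "J. Rivera",
     "target": date(2025, 3, 1), "updated": date(2025, 2, 20)},
    {"item": "Encrypt laptops", "owner": "IT will fix this",  # not an owner
     "target": None, "updated": date(2024, 6, 1)},
]

def findings(rows, today):
    """Flag the failure modes the checklist names: no owner, no date, stale."""
    out = []
    for r in rows:
        if r["owner"].lower().startswith("it will") or not r["target"]:
            out.append(f'{r["item"]}: missing real owner or target date')
        elif (today - r["updated"]).days > 90:
            out.append(f'{r["item"]}: no movement in 90 days')
    return out

print(findings(tracker, date(2025, 3, 15)))
```

A check like this run quarterly is one way to produce the "visible movement over time" artifact the item asks for.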

D5

Evidence the plan is actually being executed?

A pristine plan with no progress tells a worse story than a plan with documented setbacks and adjustments. The evidence that the program is operating includes change records, training completions, security committee minutes, audit findings closed, and the small steady stream of artifacts that show the work is happening. Absence of evidence is evidence of absence.

Good: a quarterly status review with artifacts — meeting notes, completion records, remediation closures — attached and dated.

E
5 items

Documentation & refresh.

Whether the analysis exists as a real document — and whether anyone could find it under audit pressure.

E1

A real document, not a checklist?

A risk analysis is a written document. Twenty pages or a hundred — the length depends on the organization, but the form is prose plus structured registers, not a yes/no spreadsheet. An assessor who asks "may I see your risk analysis" expects to receive a document. A spreadsheet alone usually doesn't satisfy them.

Good: a single risk analysis document with executive summary, scope, methodology, findings, ratings, and recommendations — readable by leadership without translation.

E2

Dated, signed, and current within the last year?

An undated risk analysis is impossible to defend. A risk analysis from three years ago is barely defensible. The Security Rule expects periodic review and updates "as needed" — and OCR's interpretation of "as needed" is "at least annually plus after material changes." The signature and date matter for the same reason they do on any compliance artifact.

Good: a current analysis with a clear date, signed by leadership, and a documented annual review — even if the review concluded "no changes."

E3

Re-performed after material changes?

New EHR. New AI vendor. M&A activity. A breach. New service line. A move to a new building. Each of these is a "material change" that triggers a refresh — not a full rewrite, but a focused update to the relevant scope and risks. Most analyses we see are stale because nobody updated them when the practice changed.

Good: a "change log" attached to the analysis listing material changes since the last full review and how each was reflected.

E4

Archived where you can find it under deadline pressure?

When OCR sends a request for documentation, the deadline is short. The analysis sitting on a shared drive nobody can reach without IT support is not a successful archive. The Privacy Officer or Security Officer should be able to retrieve the current analysis and the previous two versions in under five minutes, on their own.

Good: the current analysis and a versioned history accessible to the responsible officer without dependency on IT or external counsel.

E5

Part of your retention plan?

HIPAA requires six years of retention for compliance documentation, including risk analyses. Many states extend it. The retention obligation runs from when the document was created or last in effect — meaning a 2018 analysis is still covered by retention through at least 2024, even if it's been superseded. Your retention policy should explicitly call out risk analyses with the right clock.

Good: risk analyses listed by name in the document retention policy, with the retention clock explicitly tied to creation/effect rather than current status.
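The "right clock" can be computed directly: six years from creation or from when the document was last in effect, whichever is later. A minimal sketch with illustrative dates (and without handling state-law extensions or leap-day edge cases):

```python
from datetime import date

def retention_end(created: date, last_in_effect: date, years: int = 6) -> date:
    """HIPAA's six-year retention clock runs from creation or last-in-effect,
    whichever is later; many states extend the period further."""
    start = max(created, last_in_effect)
    return start.replace(year=start.year + years)

# A 2018 analysis superseded in 2019 must still be retained into 2025.
print(retention_end(date(2018, 4, 1), date(2019, 4, 1)))
```

Anchoring the clock to `max(created, last_in_effect)` rather than to the current-versus-superseded status is the distinction the item draws.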

Now you've read all 25

If you answered "yes, with documentation" to most of these, your analysis is ahead of the field. If most of your answers were "we'd need to check," that's not a failing grade — it's where a lot of healthcare organizations are right now.

A real HIPAA risk analysis is a multi-week project, not a weekend exercise. The firms that finish well are the ones that scope honestly — including the AI tools that arrived since the last analysis was written — and treat the Risk Management Plan as the actual deliverable, not a postscript. The next section has three ways to take this further.