The CMMC Readiness Checklist.
Twenty-five questions that give you a read on where you actually stand.
For defense contractors and subs preparing for CMMC Level 2 assessment — whether you're at zero, you've been working on this for a year, or you have an SSP from a vendor who doesn't return calls anymore. The questions below are the ones we'd ask you on the readiness call. The answers tell you whether the assessment is six months away or eighteen.
Three things to know.
- This is L2 prep, not the assessment itself. Most defense subs handling CUI are working toward CMMC Level 2 — that's the scope of this list. Level 1 (FCI only) is a smaller subset. Level 3 is rare and additive. If you don't know which you need, the boundary scoping section starts to answer that.
- Boundary scoping is the whole game. Most of the cost and pain in CMMC comes from a boundary that wasn't drawn carefully. The list starts there for a reason — getting it right makes everything downstream cheaper.
- "What good looks like" sits under each item. One line, marked "Good:". If you can answer the prompt without hedging, you can check the box. If you have to qualify the answer, that's a POA&M item.
Boundary scoping.
Where CUI lives, where it doesn't, and whether the line between them is intentional.
Have you mapped where CUI lives?
Every system, every share, every email folder where Controlled Unclassified Information is stored, processed, or transmitted. The boundary is the line between "in scope" and "out of scope" — and most firms get it wrong by drawing it too big.
Good: a list of every CUI location, by system and by team, that anyone on the leadership team can read.
Have you reduced your boundary on purpose?
Most firms inherit a boundary that includes everything connected to the network. Real CMMC work begins with intentional reduction — moving CUI to a smaller, more controllable subset of systems before you try to lock them down. A smaller boundary is cheaper to defend, audit, and maintain.
Good: a "before and after" diagram showing which systems used to handle CUI and which still do, after deliberate consolidation.
Are subcontractors handling CUI?
If your subs touch CUI, your DFARS clauses flow down to them — and so does the assessment expectation. Your CMMC certification is only valid for the boundary you control. A sub who isn't aligned can become a finding in your assessment.
Good: a list of subs touching CUI, the contractual flow-down language in place, and a status read on each one's CMMC posture.
Have you considered GCC High?
Microsoft GCC High is the right answer for some firms — usually those with substantial CUI volume or specific contractual requirements. It's the wrong answer for many others, where it's an expensive solution to a problem that could be solved more cheaply. Having the conversation matters either way.
Good: a documented decision either way, with the reasoning visible to anyone who asks "why didn't we just go GCC High?"
Is your boundary documented?
An auditor's first question is "show me the diagram." Most diagrams we see are five years old, hand-drawn on a napkin, or both. The diagram doesn't have to be beautiful — it has to be current, accurate, and aligned with the SSP.
Good: a network and data-flow diagram dated within the last six months, signed off by an authority who can defend it.
Identity & access.
Who can reach CUI, how strongly you authenticate them, and how quickly access changes when people do.
MFA on everything that touches CUI?
Email. File shares. VPN. Cloud apps. Admin accounts. Every place a credential could be used to reach CUI. CMMC L2 expects this; insurers expect it; auditors expect it. Hardware tokens or app-based factors are preferred — SMS is the floor and not always acceptable.
Good: an inventory of every CUI-adjacent system with MFA status visible per user, no exceptions.
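As a concrete sketch, a check like the one below can turn that inventory into a gap list. The record shape and the `mfa_gaps` helper are illustrative assumptions, not a required format; adapt the fields to whatever your identity provider exports.

```python
# Sketch: flag users on CUI-adjacent systems whose MFA is missing or SMS-only.
# Record fields ("user", "system", "mfa_factor") are hypothetical examples.

WEAK_FACTORS = {None, "", "sms"}  # SMS is the floor; treat it as a gap to review

def mfa_gaps(inventory):
    """Return (user, system, factor) tuples that need attention."""
    gaps = []
    for rec in inventory:
        factor = (rec.get("mfa_factor") or "").lower() or None
        if factor in WEAK_FACTORS:
            gaps.append((rec["user"], rec["system"], factor))
    return gaps

inventory = [
    {"user": "alice", "system": "email", "mfa_factor": "fido2"},
    {"user": "bob", "system": "vpn", "mfa_factor": "sms"},
    {"user": "carol", "system": "fileshare", "mfa_factor": None},
]

for user, system, factor in mfa_gaps(inventory):
    print(f"{user} on {system}: {factor or 'NO MFA'}")
```

The point isn't the script; it's that "MFA status visible per user" should be a query you can run, not a claim you make.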
Strong authentication for privileged accounts?
Domain admins, service accounts, anything with read or modify rights to CUI. Privileged access is where breaches escalate. The bar is higher: hardware tokens, just-in-time access elevation, time-limited admin sessions, separate accounts from daily use.
Good: privileged accounts tracked separately from user accounts, with elevated controls and a documented approval workflow.
Account management process?
When someone leaves, accounts get disabled within 24 hours? When roles change, access is reviewed and right-sized? When a contractor's engagement ends, do you actually remove them? Most CMMC findings around access stem from drift — accounts that should have been disabled but weren't.
Good: a documented offboarding process that closes accounts on day one, with HR and IT in the same loop.
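One way to prove the 24-hour window, sketched under the assumption that you can export separation dates from HR and disable timestamps from your directory (field names here are made up):

```python
# Sketch: verify accounts were disabled within 24 hours of separation.
# The "separated_at" / "disabled_at" fields are hypothetical export columns.
from datetime import datetime, timedelta

SLA = timedelta(hours=24)

def offboarding_misses(separations):
    """Return users whose account outlived the 24-hour window (or never closed)."""
    misses = []
    for rec in separations:
        left = datetime.fromisoformat(rec["separated_at"])
        disabled = rec.get("disabled_at")
        if disabled is None or datetime.fromisoformat(disabled) - left > SLA:
            misses.append(rec["user"])
    return misses

separations = [
    {"user": "dave",  "separated_at": "2025-03-01T09:00", "disabled_at": "2025-03-01T17:00"},
    {"user": "erin",  "separated_at": "2025-03-03T09:00", "disabled_at": "2025-03-05T09:00"},
    {"user": "frank", "separated_at": "2025-03-04T09:00", "disabled_at": None},
]

print(offboarding_misses(separations))  # erin was 48 hours late; frank was never disabled
```

Running a comparison like this monthly is exactly the kind of drift check that catches the "should have been disabled but wasn't" finding before an assessor does.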
Quarterly access reviews?
Every quarter, every CUI-touching system, a manager who actually understands what their reports do confirms that the access list is right. Not a rubber-stamp click-through — a real review with the questions asked aloud.
Good: signed quarterly review records on file, with documented changes when the review surfaces them.
Separation of duties for sensitive operations?
One person can't both make a change and approve it. The DBA can't be the only one with backup access. The system admin can't also be the audit log reader. CMMC expects you to have thought about this — and where two roles can't be split because of staffing, you'll need a compensating control.
Good: a role/responsibility matrix showing critical operations and who's authorized for each.
Documentation.
The papers that say what you do — and whether they match what you actually do.
System Security Plan (SSP) — exists, accurate, current?
The SSP is the document an assessor reads first. It describes your boundary, your controls, your responsibilities. Most SSPs we see are templated and generic, and an assessor can tell within ten minutes whether the document describes a real system or someone else's.
Good: an SSP written about your specific environment, updated within the last six months, signed by someone who can defend it.
Plan of Action & Milestones (POA&M) — does it actually drive remediation?
A POA&M without progress is a list of things you've decided not to fix. Each item should have an owner, a target date, and visible movement over time. Stale POA&Ms are an assessment red flag — they signal that the program isn't operating.
Good: a POA&M with at least monthly updates, owners assigned, and items closing as evidence accumulates.
Incident Response Plan — written, tested, shared?
If something happens, the plan should tell people what to do. It should be reviewed annually, tested at least once a year (a tabletop exercise counts), and known to the people who'd actually execute it. Most plans we see exist on a network share nobody can find at 3 AM Sunday.
Good: a current IR plan, evidence of an annual tabletop, and named contacts who know they're on the call list.
Configuration management baselines?
A "golden image" or hardened baseline for each kind of CUI-touching system. Change control that requires approval before changes ship. Deviations documented when they happen. Without a baseline, "secure configuration" is whatever each admin remembered to do that day.
Good: documented baselines per system class, change tickets for deviations, and a periodic review that catches drift.
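The "catches drift" part can be as simple as diffing current settings against the documented baseline. The sketch below assumes you can export both as key/value pairs; the setting names are illustrative, not a real hardening standard:

```python
# Sketch: diff a system's current settings against its documented baseline.
# Keys and values below are examples, not an actual hardening profile.

def drift(baseline, current):
    """Return {setting: (expected, actual)} for every deviation from baseline."""
    out = {}
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            out[key] = (expected, actual)
    return out

baseline = {"rdp_enabled": False, "log_forwarding": True, "local_admins": 1}
current  = {"rdp_enabled": True,  "log_forwarding": True, "local_admins": 3}

for setting, (want, got) in drift(baseline, current).items():
    print(f"{setting}: expected {want}, found {got}")
```

Every deviation the diff surfaces should trace to a change ticket; the ones that don't are the drift.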
Cybersecurity policies match what your team actually does?
If the policy says "all changes require ticketed approval" and your team makes changes without tickets, the policy is the problem — either the policy or the practice has to change. An assessor who finds the gap will find it everywhere.
Good: a policy review where the policies were rewritten to match operational reality, then operational reality was raised to match the new policy where needed.
Technical controls.
Highlights from NIST 800-171 that auditors zero in on. The full set is 110 controls; these are the five we see fail most often.
Encryption at rest and in transit?
Everywhere CUI travels — laptops, servers, cloud storage, backups, email gateways, VPN tunnels. CMMC L2 isn't ambiguous on this point: encryption is required, and FIPS-validated cryptography is the bar. "It supports encryption" is not the same as "it's enabled."
Good: an inventory of CUI-handling systems with encryption status confirmed and FIPS-validated implementations called out.
Audit logging — turned on, retained, monitored?
Logs of authentication events, access to CUI, configuration changes. Retained for at least 90 days. Reviewed regularly — not stored as evidence "in case." A SIEM or log aggregation tool helps; "the log files are on the server" is not enough.
Good: a centralized log store with the right events flowing in, retention proven, and a documented review cadence.
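"Retention proven" means you can show the oldest retained event per source, not just a retention policy. A minimal sketch, assuming you can pull that oldest-event date per log source (the source names and dates are invented):

```python
# Sketch: prove 90-day retention by checking the oldest event per log source.
# Source names and dates are illustrative.
from datetime import date, timedelta

RETENTION = timedelta(days=90)

def retention_gaps(sources, today):
    """Return sources whose oldest retained event is younger than 90 days."""
    return [name for name, oldest in sources.items()
            if today - oldest < RETENTION]

sources = {
    "auth": date(2024, 10, 1),           # roughly 160 days of history: fine
    "cui_fileshare": date(2025, 2, 20),  # only a few weeks retained: gap
}

print(retention_gaps(sources, today=date(2025, 3, 10)))  # ['cui_fileshare']
```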
Vulnerability scanning — monthly minimum, results driving action?
An authenticated scan of CUI-touching systems, run monthly at minimum. Findings triaged, prioritized, fed into the POA&M, and closed with documented evidence. A scan that runs but isn't acted on is a scan that's making the assessor's case for them.
Good: monthly scans with trend data showing findings closing over time, integrated with the POA&M.
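The trend data is just a count of still-open findings per monthly scan. A sketch of that roll-up, assuming your scanner exports findings with a month and a status (the record shape is hypothetical):

```python
# Sketch: turn monthly scan exports into the trend an assessor wants to see:
# open findings falling over time. Record fields are hypothetical.
from collections import Counter

def open_by_month(findings):
    """Count findings still open in each month's scan."""
    counts = Counter()
    for f in findings:
        if f["status"] == "open":
            counts[f["scan_month"]] += 1
    return dict(sorted(counts.items()))

findings = [
    {"scan_month": "2025-01", "id": "CVE-A", "status": "open"},
    {"scan_month": "2025-01", "id": "CVE-B", "status": "open"},
    {"scan_month": "2025-02", "id": "CVE-A", "status": "closed"},
    {"scan_month": "2025-02", "id": "CVE-B", "status": "open"},
]

print(open_by_month(findings))  # {'2025-01': 2, '2025-02': 1}
```

A downward-sloping version of that dict is the difference between "we scan" and "scanning drives remediation."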
Endpoint protection / EDR on every endpoint?
Modern EDR or XDR on every system handling CUI. Not just AV — behavioral detection, response capability, central management. The endpoint is the most attacked surface; the controls there matter the most. CMMC L2 expects active monitoring, not just passive AV.
Good: a roster of every CUI-touching endpoint with EDR coverage confirmed, alerts going to a real human, and exclusions documented.
Security awareness training — annual minimum, with proof?
Annual training for everyone with a CUI-relevant role, phishing simulations a few times a year, completion tracked centrally. The "we do training" claim without records doesn't survive an assessor's first follow-up question. Records are the artifact, not the training itself.
Good: training records retrievable per person, current within 12 months, with phishing simulation results trended over time.
Audit prep.
The work that turns "we have controls" into "we can defend the controls in front of an assessor."
Real gap assessment in the last 12 months?
Not a self-rated checklist — a real assessment by someone who's done this before, mapping your environment to the 110 NIST 800-171 controls and producing a defensible read on each. Most firms learn more from the gap assessment than from the C3PAO assessment that follows.
Good: a written gap report dated within the last year, with prioritized findings feeding the POA&M.
Mock interviews with system owners?
An assessor will interview the people running your systems. If the IT lead can't explain how the encryption is configured, or the HR manager can't articulate the offboarding process, the documented controls don't matter. Mock interviews surface those gaps before the real one does.
Good: a recent set of mock interviews with documented findings — what each system owner could and couldn't speak to — and remediation where the gaps were real.
Evidence artifacts collected and indexed?
Logs, screenshots, policies, training records, network diagrams, MFA reports, scan results — all in one place, organized to map back to the relevant controls. An assessor doesn't have time to play scavenger hunt; the firms that score well make the evidence easy to find.
Good: a single evidence repository organized by control family, with each artifact dated and pointing to the source system.
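"Organized by control family" can be generated from a simple artifact-to-control mapping, since NIST 800-171 control IDs carry their family in the first two segments (3.1 is Access Control, 3.3 is Audit and Accountability, and so on). The artifact names and mappings below are examples, not a required scheme:

```python
# Sketch: index evidence artifacts by NIST 800-171 control family, derived
# from the "3.x" prefix of each control ID. Artifact names are examples.
from collections import defaultdict

def index_by_family(artifacts):
    """Group artifact names under the family prefix of each mapped control."""
    index = defaultdict(list)
    for name, controls in artifacts.items():
        for ctl in controls:
            family = ".".join(ctl.split(".")[:2])  # "3.1.1" -> "3.1"
            if name not in index[family]:
                index[family].append(name)
    return dict(index)

artifacts = {
    "mfa_report_2025-03.csv": ["3.5.3"],
    "quarterly_access_review_q1.pdf": ["3.1.1", "3.1.2"],
    "log_retention_screenshot.png": ["3.3.1"],
}

for family, names in sorted(index_by_family(artifacts).items()):
    print(family, "->", names)
```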
SPRS score current and submitted?
Your DFARS 252.204-7019 self-assessment score, submitted to SPRS, is a number your contracting officer can see. Stale or missing scores create friction independent of CMMC. The score should reflect your actual posture, refreshed at least annually.
Good: a SPRS score submitted in the last 12 months, with the underlying calculation defensible if anyone asks.
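"Defensible calculation" means you can reproduce the arithmetic of the DoD Assessment Methodology: start at 110 and subtract each unmet control's weight (1, 3, or 5 points, per the methodology's table). The sketch below shows the mechanics; the weights dict covers only a few example controls and is illustrative, not the full table:

```python
# Sketch: DFARS self-assessment arithmetic per the DoD Assessment Methodology.
# Start at 110, subtract each unmet control's weight. The weights below are
# illustrative examples -- the methodology assigns a weight to all 110 controls.

MAX_SCORE = 110

def sprs_score(unmet, weights):
    """110 minus the summed weights of unmet controls (the score can go negative)."""
    return MAX_SCORE - sum(weights[ctl] for ctl in unmet)

weights = {"3.1.1": 5, "3.5.3": 5, "3.13.11": 5, "3.1.22": 1}  # example subset

print(sprs_score(["3.13.11", "3.1.22"], weights))  # 110 - 5 - 1 = 104
```

If you can't show which controls you deducted for and why, the number isn't defensible, whatever it is.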
Legal counsel familiar with DFARS / CMMC on speed dial?
When something goes ambiguous — a sub's posture, a flow-down clause, an incident with reportable implications — you want counsel who's done this before. General corporate counsel without DoD experience is a fine resource for most things, but not the right resource for this. Identify them before you need them.
Good: a named outside firm or counsel who has handled DFARS or CMMC work, on retainer or with a known engagement path.
If you answered "yes, with evidence" to most of them, you're closer than most. If you answered "we'd need to check" to most, you're not behind — you're where most defense subs are right now.
CMMC is a journey measured in months, not weeks. The firms that finish well are the ones that start with a clear-eyed read on where they stand — and don't try to skip the boundary scoping conversation just because the technical controls feel more concrete. The next section has three ways to take this further.
Three ways to take this further.
You've gone through the list. Pick the path that matches where you actually are.
Walk through your answers with us.
Bring your responses to the readiness call. We'll tell you which gaps are urgent, which are POA&M material, and which are red herrings — and give you a real estimate for what closing them costs. No qualifying call before the qualifying call.
Schedule the call →
Read the Defense Contractors page.
More context on how we work with defense subs — the services we offer, the pillars that make us a fit (or not), and the questions other firms have asked us most often. Worth a read before the call so you walk in with sharper questions.
See the page →
Browse the rest of the library.
If your team also touches HIPAA, AI governance, or customer audit response work, the other checklists may be worth bookmarking. We're shipping new ones in the next few weeks.
See all resources →