The AI Governance Quick Start.
Twenty-five questions for businesses without a Chief AI Officer.
For leadership teams with a sense that AI is happening on the ground but no real read on what's being used, where company data is going, or what the policy should say. Practical first steps, written for businesses where AI governance is one item on a long list — not someone's full-time job. If you can answer most of these with evidence, your program is ahead of the field.
Three things to know.
- This is the operator's version of AI governance. Not the academic version. Not the "we hired McKinsey" version. The version for the leadership team running a real business that has noticed AI is happening on the ground and wants to do something sensible about it. If you're regulated (CMMC, HIPAA, SOC 2), this complements the regulatory work — it doesn't replace it.
- Most of the work is visibility. The hardest part of AI governance isn't writing the policy — it's knowing what's actually happening before you write it. The first section is the one most teams skip and most regret skipping. Doing it badly creates a policy that doesn't match reality, which is worse than no policy at all.
- "What good looks like" sits under each item. Italicized, one line. If you can answer the prompt with a real artifact or process, you can check the box. Aspirational answers don't count — and most of the value in this checklist is the small honesty of saying "no, we haven't actually done that yet."
Visibility.
Knowing what AI is actually being used in your business — including the AI that showed up without anyone telling you.
Inventoried the AI tools your team is actually using?
ChatGPT. Claude. Copilot. Gemini. The transcription tool the sales team started using. The image generator marketing's been using for ad creative. The coding assistant in the dev environment. Most companies underestimate this number by half. The first version of the inventory should be uncomfortable — that's how you know it's accurate.
Good: a current inventory listing every AI tool in use, who's using it, for what purpose, and whether it's company-paid or personal-account.
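If it helps to picture the artifact, below is a minimal sketch of an inventory record in Python. The field names and the single example entry are illustrative assumptions, not a prescribed schema; a spreadsheet with the same columns does the same job.

```python
# Minimal sketch of an AI tool inventory record. Field names and the example
# entry are illustrative assumptions, not a required schema.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIToolEntry:
    tool: str          # e.g. "ChatGPT", "Copilot"
    team: str          # who is using it
    purpose: str       # what it's used for
    account_type: str  # "company-paid" or "personal-account"
    data_sent: str     # categories of data going in
    approved: bool     # has it been through an intake/approval process?

inventory = [
    AIToolEntry("ChatGPT", "Sales", "Drafting prospect emails",
                "personal-account", "prospect names, email threads", False),
]

# Export to CSV so the inventory lives somewhere leadership can actually read it.
with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolEntry)])
    writer.writeheader()
    writer.writerows(asdict(entry) for entry in inventory)
```

The same shape extends to the use-case map in the next item: add a task column and an internal-versus-customer-facing flag, and the tool inventory becomes a per-team map.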
Identified which roles and teams are using AI for which tasks?
Sales using AI to draft emails to prospects is different from sales using AI to score and rank leads. Marketing using AI to brainstorm headlines is different from marketing using AI to generate finished customer-facing copy. The distinction matters because the risk profile and the appropriate guardrails are different. A use-case map per team beats a tool inventory alone.
Good: a use-case map showing department, task, AI tool, and whether the output is internal-only or customer-facing.
Tracked what data is going into AI tools?
When someone pastes a customer email into ChatGPT to "summarize this for me," that customer email leaves your environment. When someone uploads a contract to a vendor's AI to "extract the key terms," that contract leaves your environment. The visibility question isn't "what tools are we using" — it's "what data are we sending out." Most teams have never asked.
Good: a documented assessment of data categories flowing into each AI tool — customer data, employee data, financial, IP, regulated information.
Accounted for AI inside the vendor tools you already use?
Microsoft 365 has Copilot. Google Workspace has Gemini. Slack has AI summaries. Salesforce has Einstein. Zoom has AI Companion. Most of these were turned on by default in product updates you may not have read about. The AI in your existing stack is often more pervasive than the AI tools you specifically went out and bought.
Good: a review of every major SaaS vendor in your stack with their AI features cataloged, default settings reviewed, and admin-level controls understood.
Identified AI features in your customer-facing products?
A chatbot on the website. AI-powered search. Recommendation engines. Automated underwriting or pricing. Whatever's in front of customers — including AI features your developers added without flagging them as AI — has different obligations than internal use. State laws (Colorado, California) and emerging federal guidance subject customer-facing AI to extra scrutiny. Knowing what you ship to customers is the prerequisite to governing it.
Good: a customer-facing AI inventory listing each feature, what it decides or generates, and what the customer is told (or not told) about it.
Policy.
The written rules — clear enough that an employee on day one knows what's OK and what isn't.
A written AI Acceptable Use Policy that exists?
Not a Slack message. Not a verbal "we're working on it." A written document with a date, an owner, and a version number. Most companies skip this because it feels premature — but the absence of policy is itself a policy, and it's usually the wrong one. A first draft beats no draft. You can revise as the picture sharpens.
Good: a current AI AUP, dated within the last 12 months, signed by an executive, accessible to every employee.
A clear approval process for new AI tools?
When a team wants to start using a new AI tool, what happens? Who approves it? What gets reviewed? The contract, the data flow, the security posture, the cost? Without a process, "approval" defaults to whoever has a credit card and feels good about the trial — which is how most shadow AI shows up in the inventory you just built.
Good: a documented intake process with named approver(s), a review checklist, and a decision log that's reviewed at least quarterly.
Banned use cases named explicitly?
A policy that says "use AI thoughtfully" gives no employee the answer they need. A policy that says "do not paste customer PII into a personal-account AI tool" or "do not use AI to draft customer-facing legal commitments without attorney review" tells someone what to do at 4 PM on a Wednesday. Specificity is what makes policy useful. Bans are clearer than guidelines.
Good: a list of explicit banned use cases — concrete enough that an employee can map their work to it without consulting a manager.
Onboarding and training that covers AI?
When a new employee starts, they get the AUP in their first week, with a real explanation — not a 47-page PDF buried in a benefits portal. Annual refreshers for everyone. Specific role-based training where the use cases warrant it (sales, customer service, anyone touching regulated data). Most policies fail because nobody trained anyone on them.
Good: AI use covered in new-hire onboarding, refreshed annually, with role-specific deep dives where the use cases warrant them.
Signed acknowledgments — not just emailed PDFs?
An employee who signed the AUP knows it exists. An employee who got an email about it might not. The acknowledgment is a small ritual that does a lot of work — it makes the policy official, it creates a record, and it forces a moment where someone actually read the document. Annually is fine. Once at hire and never again is not.
Good: signed acknowledgments on file for every current employee, current within the last 12 months, retrievable by HR within minutes.
Data handling.
What goes into AI, where it lands, what the vendor does with it, and whether you'd know if customer data left the building.
Defined what data is OK as input — and what isn't?
Customer PII. Employee records. Financial statements. Trade secrets. Confidential client communications. Health information. Each of these has different rules — some legal, some contractual, some just sensible. The policy needs to be specific about each category, not handwave at "sensitive data." Employees need to know whether they can paste a sales call transcript into ChatGPT, and the answer should be in writing.
Good: a data-classification table mapping data categories to AI tools they're approved (or banned) for, refreshed when new tools are added.
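As a sketch of how concrete that table can be, the snippet below models the classification as a simple approved-inputs lookup. The categories, tool names, and rules are assumptions for illustration; the real table belongs in the policy document, but the logic is the same.

```python
# Illustrative data-classification lookup: which data categories are approved
# for which AI tools. Categories, tools, and rules here are example assumptions.
APPROVED_INPUTS = {
    "public marketing copy": {"ChatGPT (enterprise)", "Copilot"},
    "internal documents":    {"Copilot"},
    "customer PII":          set(),  # banned everywhere in this example
    "financial statements":  set(),
    "health information":    set(),
}

def is_approved(data_category: str, tool: str) -> bool:
    """Approved only if the category is explicitly listed for the tool."""
    return tool in APPROVED_INPUTS.get(data_category, set())

print(is_approved("customer PII", "ChatGPT (enterprise)"))  # False
print(is_approved("public marketing copy", "Copilot"))      # True
```

Note the default: anything not explicitly approved comes back as banned, which is the safer posture when a new tool or data category shows up before the table is updated.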
Reviewed your AI vendor contracts for what they actually say?
Does the vendor train on your prompts? Retain them? Share them with subprocessors? Use them for analytics? Most consumer AI tools have terms that allow more than your customers would be comfortable with. Most enterprise tiers have better terms — but only if someone read them, negotiated them, and confirmed the controls match the contract. The free tier of a tool used by sales is the weakest link.
Good: every AI vendor's data-use terms reviewed and summarized, with non-conforming tools either upgraded, replaced, or banned.
Understood retention and deletion at each AI vendor?
When an employee stops using the tool, what happens to everything they typed in? When the company stops paying, what happens to the workspace's data? Some vendors retain prompts for 30 days, some indefinitely, some let you delete on demand. Customer contracts may obligate you to delete on request, and "we used a vendor" doesn't get you out of that obligation.
Good: retention and deletion behavior documented per vendor, with deletion procedures tested at least once.
Addressed confidentiality with professional service providers?
Your law firm, accountant, financial advisor, and consultants are using AI too — likely with your data. Does your engagement letter address it? Have you asked what they're doing? "Privileged communication" doesn't automatically extend to a third-party AI vendor processing the email on its way to your attorney. Most firms haven't updated their engagement language; you may need to be the one to ask.
Good: AI clauses added to engagement letters and contracts with professional service providers handling sensitive information.
Have any way of telling if data left without authorization?
If someone uploaded the customer database to ChatGPT yesterday, would you know? Most companies couldn't tell. DLP tools, browser-extension monitoring, network logs that flag uploads to known AI domains — there's a spectrum, and 100% prevention is unrealistic. But "could we know within a week?" should be a "yes" for any data category you care about. Without that, the policy is unenforceable.
Good: a basic egress visibility capability — DLP, monitoring, or even periodic audit — sized to the sensitivity of data in your environment.
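For teams starting from zero, even a periodic log audit counts as a capability. The sketch below assumes a CSV-style web proxy or DNS log with user and domain columns and a short watch list of AI domains; the log format, file name, and domain list are assumptions, and real DLP tooling goes much further.

```python
# Minimal periodic egress audit: count requests to known AI domains per user.
# Assumes a CSV log with "user" and "domain" columns; adjust to your log source.
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def flag_ai_traffic(log_path: str) -> Counter:
    """Count requests per (user, domain) for domains on the watch list."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits[(row["user"], row["domain"])] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in flag_ai_traffic("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

This doesn't prevent anything, and it won't catch uploads from personal devices, but it turns "we'd have no idea" into "we'd know within a week," which is the bar the item above sets.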
Higher-risk uses.
The use cases that need extra controls — because the consequences of getting them wrong are bigger than a Slack message gone awry.
Identified AI making customer-facing decisions?
Pricing. Eligibility. Approval. Recommendations that materially shape what a customer sees, pays, or qualifies for. The legal landscape here is moving fast — Colorado's AI Act, the EU AI Act, NYC's bias audit law, FTC enforcement, sector-specific rules in financial services and insurance. The right answer isn't always "ban it"; it's "name it explicitly, control it deliberately, and don't let it drift into autonomy by accident."
Good: customer-facing AI decisions cataloged with documented human oversight, fairness review, and customer-disclosure approach.
Identified AI making employee-facing decisions?
Resume screening. Interview scoring. Performance ratings. Promotion recommendations. Scheduling and workforce optimization. Everything that affects an employee's career or paycheck deserves more scrutiny than how AI helps them write status updates. NYC, Illinois, and Maryland have specific rules. EEOC guidance is clear. The downside on getting this wrong is class-action territory.
Good: AI use in employment decisions documented with bias review, human-final-decision policy, and explicit notification to candidates and employees where required.
Defined what counts as an autonomous agent — and what controls apply?
An AI that drafts an email for a human to send is one thing. An AI that sends emails on its own, processes invoices automatically, books appointments without confirmation, or moves money between accounts is a different category — and the failure modes are categorically different. Most companies haven't drawn this line yet, which means the line is being drawn for them by whatever vendor's roadmap ships first.
Good: a policy distinguishing "AI-assisted" (human in the loop) from "AI-autonomous" (no human in the loop) with controls and approvals specific to each.
Set rules for AI-generated content sent externally?
Marketing emails. Customer service responses. Sales proposals. Press releases. Contracts and legal language. AI is increasingly producing what your business says to the outside world — and "AI generated this" is no defense if the content is wrong, defamatory, infringing, or just embarrassing. The question is who reviews and signs off, and at what threshold.
Good: a tiered review policy — what AI-generated content can ship as-is, what requires human review, what requires legal review — applied consistently.
Addressed AI in safety- or compliance-critical work?
If you're in healthcare, finance, defense, manufacturing, or any regulated industry, certain work is held to a higher standard regardless of how it's done. AI doesn't reduce that standard — and "the AI made the recommendation" doesn't transfer the obligation. If a human professional is required to make the final judgment under your industry's rules, the AI policy needs to say that clearly enough that nobody could reasonably misunderstand.
Good: regulated work explicitly carved out of "AI may assist" categories where applicable, with documented human-final-decision requirements and audit trails.
Operating cadence.
What turns a one-time effort into a real program — and keeps it from going stale six months in.
Named a single accountable owner?
Not a committee. Not "IT and Legal share it." One name, with budget authority and direct access to the leadership team. The owner can be a CTO, COO, GC, CISO — depends on the business. They don't have to do all the work, but they have to own that the work happens. Programs without an owner default to nobody, which produces predictable results.
Good: a named executive owner of AI governance, with the role documented, the title visible to employees, and quarterly leadership-team updates expected.
Quarterly reviews scheduled — and actually held?
AI moves faster than most enterprise change cycles. Vendors ship new features monthly. Use cases evolve. Regulations are still settling. A program reviewed annually is a program reviewed too rarely. Quarterly reviews — even short ones — keep the inventory current and the policy aligned with reality. The first quarterly review usually surfaces things that should have been caught earlier.
Good: a recurring quarterly review on the calendar, with attendees, an agenda, and notes from at least the last two reviews on file.
Incident response plan covers AI-specific events?
An employee leaks customer data into ChatGPT. A vendor's AI generates output that turns out to be defamatory or infringing. A customer-facing recommendation engine starts making demonstrably biased calls. A regulator opens an inquiry about your AI use. Each of these is an incident with its own response pattern, and the team that thinks it through on a Tuesday afternoon does much better than the team thinking it through at 11 PM on a Sunday.
Good: AI-specific scenarios in the incident response plan, with response playbooks for the top three to five most-likely events.
Leadership and (where applicable) board are getting visibility?
For private companies, this means quarterly leadership-team agenda time on AI use, costs, risks, and incidents. For boards, it means inclusion in standing risk and audit committee reporting. AI is now a routine governance topic in mature companies — not a one-time deep dive. The frequency calibrates to the company's risk profile, but the answer should never be "we've never reported on it."
Good: a recurring section in leadership or board reports covering AI program status, current risks, and material incidents — with at least one cycle of evidence on file.
A documented plan for how the program evolves?
Where will you be a year from now? Two years? More automation, more human review, more vendors, fewer? The honest answer is "we don't know exactly" — but you should have a working hypothesis, a budget, and a sense of the next few moves. The companies that handle AI governance well treat it as a multi-year practice they're maturing, not a one-time policy artifact they're producing.
Good: a written multi-quarter roadmap with named milestones, a budget envelope, and an annual leadership refresh.
If you answered "yes, with documentation" to most of these, your governance program is ahead of where most businesses are right now. If most of your answers were "we'd need to check," that isn't a failing grade — it's a fair starting position, and the work to close the gaps is mostly straightforward once you start.
AI governance is a leadership exercise more than a technical one. The hardest parts — the inventory, the policy, the operating cadence — don't require an engineering investment. They require an honest read on what's happening, a willingness to write rules that match reality, and a small recurring commitment to keep the work fresh. The next section has three ways to take it further.
Three ways to take this further.
You've gone through the list. Pick the path that matches where you actually are.
Walk through your answers with us.
Bring whichever answers you got stuck on, the AI tools you're not sure about, and the policy draft you've been putting off. We'll tell you where the real gaps are, what's worth tackling first, and roughly what a serious AI governance program costs to stand up. No qualifying call before the qualifying call.
Schedule the call →
Read about the AI Exposure Report.
If the visibility section was harder than the policy section — and it is for most companies — the Exposure Report is the paid version of that work. We come in for a week, do the inventory, the data-flow map, the vendor review, and produce the artifact you can hand to leadership or the board. Worth a look if the self-serve version isn't enough.
See deliverables →
Browse the rest of the library.
If your team also touches CMMC, HIPAA, or customer audit response work, the other checklists may be worth bookmarking. We're shipping new ones in the next few weeks.
See all resources →