AI Governance, Vendor Evaluation and Acceptable-Use Policy
Every manager in 2026 needs three artifacts: a one-page acceptable-use policy for your team, a vendor evaluation framework for new AI tools, and an honest position on what you escalate to IT, Legal, or Security. This lesson gives you templates for all three.
Governance sounds heavy. At the team level, it does not have to be. A page of clear "do this, don't do that" guidance — written by you, agreed by your team, reviewed quarterly — does 90% of the work.
What You'll Learn
- The shape of a team-level acceptable-use policy
- A 10-criterion vendor evaluation framework for new AI tools
- The escalation tree: what you can handle vs. what goes to IT/Legal/Security
- A change management framework for rolling out new AI tools to your team
- How to handle the most common governance failure modes
- A first-month governance checklist
Why You Need a Team-Level Policy
Companies have AI policies. They are usually long, often vague, and frequently ignored. Your team needs something different: a short, opinionated, specific policy that translates the company-wide rules into your team's actual workflows.
A team policy answers:
- Which tools we use on this team
- What kinds of data are okay to paste, and what are not
- What we use AI for, and what we keep human-only
- How we review AI-drafted work before it leaves the team
- Who to ask when in doubt
If your company already has an AI policy, your team policy must be a subset — never broader. Add specificity. Do not invent permissions.
The One-Page Team AI Policy Template
Here is the shape. Adapt the substance to your team.
TEAM AI ACCEPTABLE-USE POLICY
[Team name]
Last reviewed: [date] | Owner: [your name]
1. APPROVED TOOLS
- General-purpose AI: [ChatGPT Business / Claude Team / etc.] via our company account
- Embedded AI: [Microsoft 365 Copilot / Gemini for Workspace]
- Specialized: [Otter for meetings, etc.]
- Do NOT use personal accounts of any of the above for work.
2. DATA THAT IS OKAY TO PASTE
- Internal documents not marked Confidential
- Your own raw notes, drafts, and bullets
- Public information (web articles, press releases)
- Anonymized or initials-only versions of internal data
3. DATA THAT IS NOT OKAY TO PASTE — NO EXCEPTIONS
- Customer names or PII (email, phone, address)
- Compensation data of any kind
- Health information
- Information labeled Confidential, Restricted, or NDA-covered
- Source code (unless using a company-approved code AI tool)
- Anything from another company's systems
4. WHAT WE USE AI FOR
- Drafting communications (always reviewed before sending)
- Summarizing meetings, docs, and threads
- Brainstorming and structured thinking
- Pattern-spotting across notes
- Process documentation
5. WHAT WE DO NOT USE AI FOR
- Final decisions on people matters (hiring, performance, comp)
- Customer escalations involving real humans in pain
- Anything regulated (legal advice, medical advice, financial advice)
- Generating fictional examples or numbers in real documents
6. THE REVIEW RULE
- Any AI-drafted content leaving the team is reviewed by a human first
- The reviewer is responsible for accuracy, not the AI
7. WHEN IN DOUBT
- Ask [your name or your lead] before pasting
- Escalate to [IT / Security contact] if the question touches data protection
- Escalate to [Legal contact] if the question touches regulation
8. THIS POLICY IS REVIEWED QUARTERLY
- Last review: [date]
- Next review: [date]
- Comments welcome anytime
Print it. Pin it. Distribute it. Re-read it at the top of every quarterly team review.
The Vendor Evaluation Framework
Every quarter, new AI tools show up at your door. Salespeople call. Your reports say "we should use X." How do you decide?
Ten criteria, scored 1-5 each, for a total out of 50. Below 30: do not pursue. 35 or above: run a pilot. Scores of 30-34 are a judgment call — park the tool and revisit next quarter.
The Ten Criteria
1. Fit to a real workflow we have. Does this tool solve a problem we already have, or is it a solution looking for a problem? Score 1 (no fit) to 5 (solves a workflow we run weekly).
2. Differentiation from tools we already pay for. Could ChatGPT, Claude, Copilot, or Gemini do this? Score 1 (no, our existing tools cover it) to 5 (genuinely different capability).
3. Security posture. SOC 2 Type II? Data-not-for-training guarantee? Encryption at rest and in transit? Sub-processor disclosure? Score 1 (concerning) to 5 (mature security with documentation).
4. Data residency and privacy. Where is data stored? Can it be region-locked if needed? GDPR-compliant if relevant? Score 1 (unclear) to 5 (clear, documented, region options).
5. Integration with our stack. Does it plug into the tools we already use, or create a new island? Score 1 (separate island) to 5 (native integration).
6. Admin and audit controls. SSO? Role-based access? Audit logs? Ability to revoke user access centrally? Score 1 (consumer-grade) to 5 (enterprise-grade).
7. Vendor stability. How old is the company? Funding posture? Reference customers similar to us? Score 1 (concerning) to 5 (well-established).
8. Total cost of ownership. Seat cost plus integration cost plus training time plus ongoing admin. Score 1 (expensive vs. value) to 5 (clear ROI at the asking price).
9. Lock-in risk. Can we export our data and prompts if we leave? Score 1 (significant lock-in) to 5 (clean export).
10. Quality of output on our use cases. Run a real test with real data (cleaned). Score 1 (worse than what we have) to 5 (clearly better).
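The scorecard arithmetic above fits in a spreadsheet, but it can also be kept as a small script. A minimal sketch — the criterion names and the 30/35 thresholds come from this lesson; the function name and data shapes are illustrative:

```python
# Sketch of the 10-criterion vendor scorecard described above.
# The 30 / 35 thresholds come from this lesson; everything else
# (names, dict shape) is an illustrative assumption.

CRITERIA = [
    "workflow_fit", "differentiation", "security_posture",
    "data_residency", "integration", "admin_controls",
    "vendor_stability", "total_cost", "lock_in_risk", "output_quality",
]

def evaluate_vendor(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the ten 1-5 scores and apply the lesson's thresholds."""
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be scored 1-5, got {value}")
    total = sum(scores[c] for c in CRITERIA)
    if total < 30:
        return total, "do not pursue"
    if total >= 35:
        return total, "run a 30-day pilot"
    # The framework leaves 30-34 undefined: treat it as a manager call.
    return total, "borderline: manager judgment call"

# Example: strong on fit and quality, weak on lock-in risk
example = dict.fromkeys(CRITERIA, 4)
example["lock_in_risk"] = 2
total, decision = evaluate_vendor(example)  # 38 -> run a 30-day pilot
```

The point of scripting it is less automation than discipline: the function refuses to produce a decision until all ten criteria have a score.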
The Vendor Pilot
If a tool scores 35+, run a 30-day pilot. Two reports use it on real work. Track:
- Time savings on the target workflow (vs. existing tool)
- Quality of output (sample blind, score on rubric)
- Failure modes encountered
- Security or governance concerns surfaced during use
- Total cost of the rollout if scaled to the full team
After 30 days, decide: scale, kill, or extend the pilot another 30 days. Do not let pilots drift indefinitely. They die from neglect.
The Escalation Tree
Most governance questions look like one of these. Know the answer in advance.
Manager handles directly:
- "Is it okay to paste these public competitor press releases into ChatGPT for analysis?"
- "Which prompt should we use for the weekly status update?"
- "Can we use Otter for our internal weekly meeting?"
- "Should we add this tool to the prompt library?"
Escalate to IT or Security:
- "Is this new tool approved for company use?"
- "What is our data retention policy for AI chat history?"
- "Someone on the team pasted customer data into a public tool — what now?"
- "Can we get SSO set up for this tool?"
Escalate to Legal:
- "Can we use AI to draft a contract?"
- "What disclosures do we owe customers when using AI in their data flow?"
- "Are we subject to the EU AI Act for this use case?"
- "Can we feed customer call recordings into a sales analytics tool?"
Escalate to HR:
- "Can we use AI for performance review scoring?"
- "What do we tell candidates about AI in the hiring process?"
- "Someone reported a colleague using AI inappropriately — how do we handle it?"
Escalate to your skip-level:
- "Should we change our entire team's AI tool stack?"
- "We need budget for new AI seats"
- "I want to publish this AI ROI memo to the broader org"
Print this tree as page 2 of your team policy.
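If your team keeps its policy in a wiki or a chat bot, the same tree can live as a lookup table. A sketch — the categories and routes come from the tree above; the category labels and default are my assumptions:

```python
# The escalation tree above as a lookup table. Routes come from this
# lesson; category labels and the "manager" default are assumptions.
ESCALATION_ROUTES = {
    "public data analysis":      "manager",
    "prompt selection":          "manager",
    "tool approval status":      "it_security",
    "data retention":            "it_security",
    "data leak incident":        "it_security",
    "contract drafting":         "legal",
    "customer disclosures":      "legal",
    "regulatory scope":          "legal",
    "performance review use":    "hr",
    "hiring process disclosure": "hr",
    "stack change or budget":    "skip_level",
}

def route(category: str) -> str:
    """Return the owner for a governance question category."""
    # Unknown questions default to the manager, per "when in doubt, ask".
    return ESCALATION_ROUTES.get(category, "manager")
```

Deliberately no keyword matching or classification here: the human picks the category, the table just makes the routing consistent.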
Change Management for New Tool Rollouts
Rolling out a new AI tool to your team is a change management exercise, not a procurement exercise. Use this checklist.
Week 0 — Decide. Vendor evaluation complete. Pilot complete. Approval from your skip-level for any new seats over your existing budget.
Week 1 — Announce. Team meeting. Walk through:
- Why we are adopting this tool
- What workflow it replaces or adds
- What it does NOT replace (be specific)
- The new entry in our acceptable-use policy
- The training plan and who is responsible for what
Weeks 2-3 — Train. One 30-minute live session. Hands-on with a real task. Each person leaves with their own real-work product made with the tool.
Weeks 4-6 — Use with support. Designated "office hours" where people can ask questions. Monitor adoption. Note friction.
Week 8 — Check in. Adoption metric, time-saved measurement, quality measurement. Decide: keep, adjust, retire.
Week 12 — Quarterly review. Tool is now part of standard operating procedure or it is gone.
Common Governance Failure Modes
1. The shadow stack. Half your team uses a tool you do not know about. They are pasting things they should not be pasting. Fix: ask, in 1:1s, what tools each report actually uses. Add the good ones to the policy. Replace the risky ones.
2. The policy nobody read. You wrote a six-page policy. Nobody finished it. Fix: cut it to one page. Walk through it in a team meeting. Have everyone acknowledge in writing.
3. The "we'll deal with it later" data leak. Someone pasted customer data into ChatGPT. You did not handle it because it felt minor. Fix: handle immediately. Document. Inform Security per company policy. Use it as a teaching moment, not a punishment.
4. The tool sprawl. Six AI tools, three paid, none used by more than two people. Fix: quarterly audit. Kill the bottom half. Reinvest in seats for the winners.
5. The "no AI is safer" trap. Banning AI on the team. Reports use personal accounts at home for work. You have lost both ROI and governance. Fix: have a sanctioned-use policy that is permissive enough to be the easier path.
The First-Month Governance Checklist
For a manager taking AI governance seriously for the first time, work through this list in your first 30 days:
- Read your company's AI policy
- Audit your team's actual AI tool usage (ask each report in 1:1)
- Pick the sanctioned tool stack (one general, one embedded, plus specialized as needed)
- Write the one-page team policy (use the template above)
- Walk through it in a team meeting
- Set up the prompt library (Module 2 lesson)
- Run the workflow mapping exercise (Module 2 lesson)
- Identify three workflows for ROI measurement
- Schedule the quarterly governance review on your calendar
- Document an escalation tree for your specific team
This list is 8-15 hours of work over a month. It is the foundational governance investment.
Key Takeaways
- The team-level AI policy is one page, opinionated, specific to your workflows — written and owned by you
- Use the 10-criterion vendor evaluation framework; do not adopt tools that score below 30
- Know the escalation tree: manager direct, IT/Security, Legal, HR, skip-level
- Roll out new tools as change management — announce, train, support, measure, decide
- Common failure modes: shadow stack, unread policy, ignored data leaks, tool sprawl, the "ban AI" trap
- The first-month governance checklist is 8-15 hours and the foundation of everything else in this course