Building Your AI-Augmented Close Workflow and Controls
You now have prompts for every stage of the close. The final lesson assembles them into a workflow you can run every month, with the controls and documentation you need to defend that workflow to your auditor and your board. This is where the time saving compounds — not in any single prompt, but in the way they connect.
What You'll Learn
- A day-by-day AI-augmented close calendar
- The four controls every AI-assisted finance function should have
- How to document AI use in a way that satisfies auditors
- The 90-day adoption roadmap for your team
A Day-by-Day Close Calendar
Below is what a mature AI-augmented close looks like on a 10-business-day cycle. Adjust the days to your own timeline.
Day -3 to Day 0: Pre-close
- Run the mid-month anomaly detection sweep on first-half journals (from the ledger anomaly lesson)
- Update your prompt library with any tuning from last month
Day 1: Hard close cut-off
- Run the cut-off review prompt across the last 5 days of the prior month and the first 3 days of the new month
- Surface issues to preparers same day
Day 2 to 3: Subledgers and reconciliations
- AI-assisted bank reconciliation triage on each major account (from the reconciliations lesson)
- AI-drafted reconciliation memos for every working paper
- Anomaly detection on the full journal listing
Day 3 to 4: Accruals and adjustments
- AI-assisted accrual memos for all recurring accruals
- AI-assisted GR-IR and unbilled review
- Cut-off final pass
Day 4 to 5: Trial balance and flux
- Master flux prompt run across full P&L
- Variance bridge prompt for P&L variance to budget
- Balance sheet flux on top 10 movements
Day 6 to 7: Management reporting
- Executive summary draft using the board pack prompt
- KPI dashboard commentary draft
- Risks and outlook draft (with bear/bull case discipline)
Day 8: CFO review
- CFO reads the AI-assisted draft
- Edits and finalises
- Significantly shorter review cycle because draft quality is high and consistent
Day 9: Final pack and distribution
- Board pack finalised
- Distributed to the board with two clear days before the meeting
Day 10: Buffer / regulatory drafting kick-off
- For quarter-end periods: kick off MD&A drafting with the regulatory prompts
Compare this to a typical pre-AI close calendar: the same outputs, delivered 2 to 3 days sooner. The compression comes from moving the drafting bottleneck earlier in the cycle.
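If you later embed this calendar in a close checklist tool, it helps to hold it as data rather than prose. A minimal Python sketch, with the day numbers and task wording taken from the calendar above; the dictionary structure itself is an assumption, not a prescribed format:

```python
# Close calendar as data: business-day offset -> AI-assisted tasks for that day.
# Day numbers follow the 10-day cycle above; negative days are pre-close.
CLOSE_CALENDAR = {
    -3: ["Mid-month anomaly sweep on first-half journals",
         "Fold last month's prompt tuning into the library"],
    1:  ["Cut-off review: last 5 days of prior month, first 3 of new month",
         "Surface cut-off issues to preparers"],
    2:  ["Bank reconciliation triage on major accounts",
         "Draft reconciliation memos",
         "Anomaly detection on full journal listing"],
    4:  ["Master flux prompt across full P&L",
         "Variance bridge to budget",
         "Balance sheet flux on top 10 movements"],
    6:  ["Executive summary draft",
         "KPI commentary draft",
         "Risks and outlook draft"],
    8:  ["CFO review and edits"],
    9:  ["Finalise and distribute board pack"],
}

def checklist_for_day(day: int) -> list[str]:
    """Return the AI-assisted tasks scheduled for a given close day."""
    return CLOSE_CALENDAR.get(day, [])
```

Because the calendar is data, shifting your whole cycle by a day, or generating the month's checklist, becomes a one-line change rather than a document edit.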
The Four Controls You Need
For AI use to be defensible inside a finance function, you need four controls. None of them are heavy. All of them should be in place by month three.
Control 1 — Approved tools list. A short written policy listing the AI tools your finance team is authorised to use, with the tier and tenant. Update quarterly. Distribute to the team. When in doubt, escalate to your IT or security counterpart before using a new tool.
Control 2 — Data classification rules. From Lesson 2, the four data categories and what tier of tool is acceptable for each. Pinned in every finance team chat channel. Reinforced in monthly stand-ups for the first 90 days of adoption.
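The classification rule is simple enough to encode as a lookup, which also makes it easy to build into any internal tooling. A sketch only: the category and tier names below are illustrative placeholders, since Lesson 2's actual category names are not restated here; substitute your own.

```python
# Data category -> minimum acceptable tool tier.
# Category and tier names are ILLUSTRATIVE PLACEHOLDERS; replace them
# with the four categories from Lesson 2 and your approved tools list.
ALLOWED_TIER = {
    "public":       "any approved tool",
    "internal":     "Business tier or above",
    "confidential": "Enterprise tenant only",
    "restricted":   "no AI tool",   # e.g. payroll, personal data
}

def tool_allowed(category: str, tier: str) -> bool:
    """Rough gate: is this tool tier acceptable for this data category?"""
    rule = ALLOWED_TIER[category]
    if rule == "no AI tool":
        return False
    if rule == "any approved tool":
        return True
    return tier in rule  # crude substring match, good enough for a sketch
```

When in doubt, the gate should fail closed: an unknown category raises a KeyError rather than silently permitting the paste.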
Control 3 — Review and sign-off. Every AI-assisted output that becomes part of an external document — audit memo, regulatory disclosure, board pack — is reviewed and signed off by a qualified human. The reviewer is named. The sign-off is captured in your close checklist.
Control 4 — Audit log of material AI use. A simple spreadsheet listing: date, task, tool used, who reviewed, what was produced. This does not need to capture every prompt — just material uses. By the time your auditor asks "where did you use AI?", you have a one-page answer.
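The log need not be more than a CSV that any team member can append to. A minimal helper, assuming the five fields listed in Control 4 and a hypothetical file name:

```python
import csv
import os
from datetime import date

LOG_PATH = "ai_use_log.csv"   # hypothetical location; put it wherever your close files live
FIELDS = ["date", "task", "tool", "reviewer", "output"]

def log_ai_use(task: str, tool: str, reviewer: str, output: str,
               path: str = LOG_PATH) -> None:
    """Append one material AI use to the audit log, writing the header on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "task": task,
            "tool": tool,
            "reviewer": reviewer,
            "output": output,
        })
```

One entry per material use is the discipline; the point is that when the auditor asks, the one-page answer already exists.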
These four controls take a half-day to set up. They protect you for the next decade of AI use.
Documenting AI Use for Auditors
By 2027, expect external audit firms to be asking about AI use in the close as a matter of routine. The defensible answer is:
"We use [tools] on [Business / Team / Enterprise] tier with prompts excluded from training. We use AI to draft narratives, summarise documents, and surface candidates for human review. All AI-assisted outputs that flow into external documents are reviewed and signed off by a qualified human. We maintain an internal log of material AI use, available on request. We do not use AI to calculate amounts, select audit samples, or perform reconciliations."
If you can say all of that truthfully, your auditor will be satisfied. If any clause is not true today, work toward it over the next two quarters.
The 90-Day Adoption Roadmap
For finance functions starting from zero, here is a realistic adoption pace.
Days 1 to 30 — Single user pilot.
- One controller or FP&A manager runs three workflows: flux commentary, accrual memos, anomaly detection
- Build a personal prompt library
- Track time saved per task
Days 31 to 60 — Team rollout for low-risk tasks.
- Extend to the whole finance team for commentary drafting and reconciliation narrative
- Approve tools list
- Publish data classification rules
Days 61 to 90 — Full close integration.
- AI prompts embedded in close checklist
- Board pack drafting fully AI-assisted
- First audit log entries captured
- First measurement of close cycle time vs pre-AI baseline
By day 90 your close calendar should be 1 to 2 days shorter than baseline. By day 180 you should be at the full 30 to 50 percent time saving with controls in place.
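The roadmap's headline metric reduces to one piece of arithmetic: percent reduction in close cycle time against the pre-AI baseline. A small sketch, with the day-90 and day-180 targets above as the reference points:

```python
def cycle_time_saving(baseline_days: float, current_days: float) -> float:
    """Percent reduction in close cycle time vs the pre-AI baseline."""
    return round(100 * (baseline_days - current_days) / baseline_days, 1)

# On a 10-day baseline, the day-90 target of a 1-to-2-day reduction
# is a 10 to 20 percent saving; the day-180 target of 30 to 50 percent
# corresponds to closing in 5 to 7 days.
```

Track the same two inputs every month so the trend, not a single reading, drives the day-180 assessment.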
Common Failure Modes to Avoid
Failure 1 — Rolling out everything in week one. Pilot one workflow with one person. Expand incrementally. Most failures come from trying to retrain a whole team simultaneously.
Failure 2 — Skipping the controls. The team that does not put the four controls in place will have one incident — usually a junior staffer pasting payroll data into a consumer tool — and the AI program will be shut down for a year. Controls first, then scale.
Failure 3 — Measuring the wrong thing. Do not measure "AI prompts run". Measure close cycle time, hours per finance person, and number of last-night edits to the board pack. The first metric is vanity; the others are real outcomes.
Failure 4 — Letting prompts become orphans. Without a shared prompt library, each team member reinvents prompts. A shared library, owned by one person who curates updates, multiplies the value across the team.
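A shared library with a named owner and simple versioning can be as light as this Python sketch; the class and field names are assumptions, not a prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class PromptEntry:
    """One curated prompt in the shared library."""
    name: str
    text: str
    owner: str          # the single curator who approves updates
    version: int = 1
    notes: str = ""     # tuning history, e.g. "v2: tightened flux threshold"

class PromptLibrary:
    """Publish-over-write library: republishing a name bumps its version."""
    def __init__(self) -> None:
        self._prompts: dict[str, PromptEntry] = {}

    def publish(self, entry: PromptEntry) -> None:
        existing = self._prompts.get(entry.name)
        if existing is not None:
            entry.version = existing.version + 1
        self._prompts[entry.name] = entry

    def get(self, name: str) -> PromptEntry:
        return self._prompts[name]
```

In practice a shared document or wiki page does the same job; the version bump and the named owner are the parts that matter, not the tooling.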
Failure 5 — Treating AI output as final. Every output is a draft. The team that signs off without reading produces errors that are obvious in retrospect — and that erode confidence in the program. Reading and editing the draft is the discipline that keeps quality high.
A Final Mindset
The teams that get the most value from AI in the close share one belief: AI is a junior accountant who never sleeps, reads faster than humans, writes in your preferred voice, and always needs review. It is not a replacement for finance skill. It is a force multiplier for finance skill.
Your job as the senior reviewer becomes more important, not less. The numbers still need to tie. The commentary still needs to ring true. The board pack still needs to land. AI compresses the time it takes to get to a good draft. You still own the result.
Your Next Steps After This Course
- Open your AI tool of choice and run the flux commentary prompt on last month's actuals. Time yourself. Compare to your manual baseline.
- Build your prompt library document. Copy the 10 to 12 prompts from this course as starting templates.
- Schedule a 30-minute team meeting to walk through the four controls. Draft the approved tools list together.
- Pick one workflow to standardise across the team for next month's close. Make it the flux commentary or the reconciliation narrative — both are low-risk and high-leverage.
- Take the final exam below to consolidate your learning, then come back to this course any time you want to refresh a specific workflow.
Key Takeaways
- A mature AI-augmented close has AI integrated at every stage, not bolted on at the end
- Four controls keep your AI program defensible: tools list, data rules, sign-off, audit log
- The defensible auditor answer is "AI assisted, human signed off, prompts excluded from training"
- 90-day rollout: 30 days single user, 30 days low-risk team rollout, 30 days full integration
- Measure outcomes — close cycle time, hours per person, last-night edits — not prompt counts