AI-FIRST LEADERSHIP STACK (2026)
90-DAY ROLLOUT CHECKLIST (Copy/Paste)

Goal: Improve delivery throughput and quality (faster cycle time, fewer defects, clearer customer impact) using AI tools while maintaining security, auditability, and clear accountability.

A) DEFINE THE BOUNDARIES (Week 1)

1) Name the approved tools (enterprise accounts only): IDE copilot, chat assistant, internal RAG, etc.

2) Define data classes in 1 page:
   - Public: OK anywhere.
   - Internal: OK only in approved enterprise tools.
   - Restricted: never paste into external tools (PII, credentials, customer data, unreleased financials, source code for regulated systems).

3) Decide retention/training settings for each vendor account (opt out of training where available).

4) Publish a simple “If you’re unsure, do this” escalation path (Slack channel + named owner).

B) INSTRUMENT THE WORK (Weeks 2–3)

5) Add PR template fields (see the PR template sketch in the appendix below):
   - AI used (Y/N), tool used, data class, verification steps.

6) Update the Definition of Done:
   - Tests required, lint/SAST required, dependency scan required.

7) Establish baselines (pre-rollout numbers):
   - Lead time for changes, deployment frequency, change failure rate, MTTR (the four DORA metrics).
   - Escaped defects per release.
   - Support escalation rate + CSAT.

C) RUN TWO PILOTS (Weeks 4–8)

8) Select two pilots: one engineering team + one GTM/support workflow.

9) Require weekly demos: what was faster, what was riskier, what policies were unclear.

10) Track 5 metrics weekly (a small delta-tracking script is sketched in the appendix below):
    - Cycle time delta (% change).
    - Review time per PR.
    - Defect/incident delta.
    - Policy violations per 1,000 interactions.
    - Customer-impact metric (CSAT/NPS/refunds/escalations).

D) CODIFY PATTERNS (Weeks 9–10)

11) Create a prompt/playbook library for common tasks:
    - Test generation, refactors, incident summaries, customer response drafts.

12) Build “golden path” repo templates (a skeleton CI workflow is sketched in the appendix below):
    - CI checks, security scanning, CODEOWNERS, PR template.

13) Establish a “high-risk change” rule (see the CODEOWNERS sketch in the appendix below):
    - Auth/crypto/payments/PII changes require an explicit security review.

E) SCALE SAFELY (Weeks 11–13)

14) Training: 45 minutes per function (engineering, PM, support). Focus on:
    - Data boundaries, citation hygiene, verification routines.

15) Launch lightweight audits (a sampling script is sketched in the appendix below):
    - Sample 10 PRs/month and 20 support responses/month. Score each for accuracy, security, and completeness.

16) Make accountability explicit:
    - “The person who ships/merges is responsible for verification.”

F) QUARTERLY REVIEW (Ongoing)

17) Decide keep/kill/expand based on outcomes, not adoption:
    - Keep if cycle time improves AND quality doesn’t degrade.
    - Kill or redesign if defect/incident rates rise materially.

18) Update policies after any incident (treat AI issues like production incidents).

19) Reassess tooling annually or when vendor terms change.

Deliverable at Day 90: A one-page AI operating policy + measured productivity and quality deltas + a standardized set of workflows that make the safe path the easy path.
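
APPENDIX: EXAMPLE SKETCHES
(All file paths, team names, tools, and numbers below are illustrative assumptions, not prescriptions. Adapt them to your own stack.)

PR template fields (item 5). A minimal sketch assuming GitHub-style Markdown PR templates; the file path and field names are assumptions, so adapt to your review tooling:

    <!-- .github/pull_request_template.md (illustrative) -->
    ## AI usage disclosure
    - AI used: Yes / No
    - Tool used: (approved enterprise tool only)
    - Data class touched: Public / Internal / Restricted
    - Verification steps:
      - [ ] Tests added/updated and passing
      - [ ] Lint and SAST clean
      - [ ] Dependency scan clean
      - [ ] Output manually verified against requirements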
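Baseline/delta tracking (items 7 and 10). A minimal Python sketch for reporting weekly deltas against pre-rollout baselines; every metric name and number here is a made-up placeholder, and real values would come from your own delivery analytics:

    # Sketch: compare weekly pilot metrics against pre-rollout baselines.
    # All numbers are illustrative placeholders, not real data.

    BASELINE = {
        "lead_time_days": 5.2,        # lead time for changes
        "deploys_per_week": 3.0,      # deployment frequency
        "change_failure_rate": 0.12,  # fraction of deploys causing failure
        "mttr_hours": 4.5,            # mean time to restore
        "escaped_defects": 6,         # per release
    }

    def pct_delta(current: float, baseline: float) -> float:
        """Percent change vs. baseline; negative is an improvement for cost-type metrics."""
        return (current - baseline) / baseline * 100.0

    def weekly_report(current: dict) -> None:
        """Print baseline, current value, and percent delta for each tracked metric."""
        for name, base in BASELINE.items():
            if name in current:
                print(f"{name:22s} baseline={base:<8g} current={current[name]:<8g} "
                      f"delta={pct_delta(current[name], base):+.1f}%")

    if __name__ == "__main__":
        # Example week-6 numbers (made up for illustration).
        weekly_report({
            "lead_time_days": 4.1,
            "deploys_per_week": 3.5,
            "change_failure_rate": 0.14,
            "mttr_hours": 4.0,
            "escaped_defects": 5,
        })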
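Golden-path CI checks (item 12). A skeleton GitHub Actions workflow; the make targets are placeholders for whichever linter, test runner, SAST tool, and dependency scanner you standardize on:

    # .github/workflows/golden-path.yml (illustrative)
    name: golden-path-checks
    on: [pull_request]
    jobs:
      checks:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Lint
            run: make lint          # placeholder for your linter
          - name: Tests
            run: make test          # Definition of Done: tests required
          - name: SAST
            run: make sast          # plug in your static analysis tool
          - name: Dependency scan
            run: make dep-scan      # plug in your dependency scanner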
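High-risk change rule (item 13). One enforcement option, assuming GitHub CODEOWNERS plus a branch protection rule that requires code-owner review; the paths and team name are hypothetical:

    # .github/CODEOWNERS (illustrative paths and team)
    /src/auth/      @yourorg/security-review
    /src/payments/  @yourorg/security-review
    /src/crypto/    @yourorg/security-review
    # With "require review from Code Owners" enabled on the default branch,
    # changes under these paths block until the security team approves.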
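Audit sampling (item 15). A minimal Python sketch that draws the monthly sample uniformly at random so auditors can't cherry-pick easy items; the ID lists are stubs you would replace with queries against your own systems, and a fixed seed keeps each month's sample reproducible for the audit record:

    # Sketch: draw the monthly audit sample (10 PRs, 20 support responses).
    import random

    def monthly_audit_sample(merged_pr_ids, ticket_ids,
                             n_prs=10, n_tickets=20, seed=None):
        """Uniform random sample of merged PRs and support responses to audit."""
        rng = random.Random(seed)
        prs = rng.sample(merged_pr_ids, min(n_prs, len(merged_pr_ids)))
        tickets = rng.sample(ticket_ids, min(n_tickets, len(ticket_ids)))
        return {"prs": prs, "support_responses": tickets}

    if __name__ == "__main__":
        # Stub ID ranges stand in for real PR and ticket identifiers.
        print(monthly_audit_sample(list(range(1000, 1100)),
                                   list(range(5000, 5400)), seed=42))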