ICMD AI CODING ASSISTANTS ROLLOUT PLAYBOOK (TEAM CHECKLIST)

Use this checklist to roll out AI coding assistants (Cursor, GitHub Copilot, Replit, or similar) without trading speed for reliability.

1) Define your goals (pick 2–3)
- Reduce cycle time (idea → merged PR) by X% (e.g., 20%)
- Increase test coverage in key services by Y% (e.g., +10 points)
- Reduce on-call incidents by Z% while maintaining shipping pace
- Improve onboarding time for new engineers by N days

2) Choose the initial scope (start narrow)
- Select 1–2 teams and 1–2 repos (avoid the most mission-critical repo at first)
- Select 2–3 use cases: test generation, refactors, scaffolding, docs, debugging
- Set a pilot length: 30–45 days

3) Establish access and identity controls
- Require SSO for paid tools; disable personal accounts for company repos
- Document who is allowed to use what (contractors, interns, vendors)
- Decide whether prompts/usage logs are retained and who can audit them

4) Set data handling rules
- Prohibit secrets in prompts (API keys, tokens, private certs)
- Decide whether proprietary code can be sent to third-party inference
- Create a “red list” of repos/modules (payments, auth, PII) that require stricter rules

5) Standardize the prompting format (make intent explicit) using a shared template:
- Context: language, framework, repo conventions
- Constraints: “no new deps,” “must preserve API,” latency budget, security rules
- Acceptance criteria: tests, expected outputs, edge cases
- Output format: patch/diff style, list of files changed, or step-by-step plan

6) Update engineering standards for AI-generated code
- AI code must pass the same CI gates (lint, unit tests, type checks)
- Require readable diffs: prefer small PRs; avoid 1,000-line “AI dumps”
- Add a note to the PR description stating what was AI-assisted and how it was verified

7) Harden your review process
- Create a reviewer checklist for sensitive changes:
  * Auth/session changes: threat model + negative tests
  * Payments/billing: idempotency, retries, audit logs
  * Data migrations: backfill plan, rollback plan, metrics
- Require domain-owner review for high-risk modules

8) Add automated guardrails
- Enable secret scanning (e.g., GitHub Advanced Security) and dependency scanning
- Use SAST tools (Semgrep, Snyk) and enforce fail-on-high severity for core services
- Add pre-commit hooks for formatting and basic checks to reduce review churn

9) Train the team (30–60 minutes is enough to start)
- Teach three high-ROI patterns: (a) ask for a plan before code, (b) generate in small chunks, (c) bind output to acceptance tests
- Share five “golden prompts” from your own codebase to set conventions

10) Measure outcomes and decide on expansion. Track these weekly during the pilot:
- Cycle time: median time from first commit to merge
- Review load: comments per PR, review rounds, PR size distribution
- Quality: bug count, incident count, severity-weighted on-call pages
- Adoption: active users per week; which use cases produce wins

Expansion rule of thumb:
- If cycle time improves ≥15% without increasing incidents, expand to more repos.
- If incidents rise, tighten constraints: smaller diffs, stronger tests, stricter review.

End state: AI becomes a normal co-author, and your process (tests + review + scanning) becomes the governor that keeps speed and safety in balance.
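The shared prompting template in step 5 can be sketched as a small helper that assembles the four fields into one prompt. All field contents below (FastAPI, black + ruff, etc.) are illustrative assumptions, not part of the playbook:

```python
def build_prompt(context, constraints, acceptance, output_format):
    """Assemble a step-5 prompt from the four shared template fields.

    Field values here are examples only; substitute your own repo
    conventions, constraints, and acceptance criteria.
    """
    sections = [
        ("Context", context),
        ("Constraints", constraints),
        ("Acceptance criteria", acceptance),
        ("Output format", output_format),
    ]
    lines = []
    for title, items in sections:
        lines.append(f"{title}:")
        lines.extend(f"- {item}" for item in items)
        lines.append("")  # blank line between sections
    return "\n".join(lines).rstrip() + "\n"

prompt = build_prompt(
    context=["Python 3.12, FastAPI service", "repo uses black + ruff"],
    constraints=["no new deps", "must preserve public API"],
    acceptance=["unit tests for empty input and unicode", "existing tests stay green"],
    output_format=["unified diff", "list of files changed"],
)
print(prompt)
```

Pinning all four fields in one function makes incomplete prompts visible at the call site: a teammate who omits acceptance criteria gets a TypeError rather than a vague prompt.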
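The secret-scanning guardrail in steps 4 and 8 can be approximated with a pre-commit check. This is a minimal sketch with a deliberately tiny pattern list; production scanners (GitHub secret scanning, gitleaks) ship far more complete rule sets, and the staged diff shown is a made-up example:

```python
import re

# Illustrative patterns only: an AWS access key ID, a PEM private key
# header, and a generic quoted api_key/token assignment.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
]

def find_secrets(text):
    """Return every secret-like substring matched in text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

# Hypothetical staged change containing a fake key.
staged_diff = 'client = Client(api_key="sk_live_abcdefghijklmnop")'
hits = find_secrets(staged_diff)
if hits:
    print(f"BLOCKED: {len(hits)} possible secret(s) in staged changes")
```

Wired into a pre-commit hook, a non-empty result would exit non-zero and block the commit before any prompt or PR ever sees the secret.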
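The step-10 expansion rule of thumb is mechanical enough to encode directly. A minimal sketch, assuming cycle times are collected as hours per merged PR and incidents as counts per pilot period (both units are assumptions, not specified by the playbook):

```python
from statistics import median

def should_expand(cycle_times_before, cycle_times_after,
                  incidents_before, incidents_after,
                  min_improvement=0.15):
    """Expand if median cycle time improved by at least min_improvement
    (15% per the rule of thumb) AND incidents did not rise."""
    before = median(cycle_times_before)
    after = median(cycle_times_after)
    improvement = (before - after) / before
    return improvement >= min_improvement and incidents_after <= incidents_before

# Made-up pilot numbers: median drops from 55h to 40h (~27% better),
# incidents fall from 4 to 3, so the rule says expand.
decision = should_expand(
    cycle_times_before=[30, 42, 55, 61, 78],
    cycle_times_after=[24, 33, 40, 52, 70],
    incidents_before=4,
    incidents_after=3,
)
print("expand" if decision else "tighten constraints")
```

Using the median rather than the mean matches the checklist's metric definition and keeps one pathological PR from dominating the decision.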