AGENTIC ORG CHART STARTER KIT (2026)

Goal: Make AI-enabled work reliable at scale by assigning decision rights, verification levels, and required evidence. Use this for any workflow where AI drafts, recommends, or executes.

1) WORKFLOW INVENTORY (list 10–20 workflows)
For each workflow, capture:
- Name (e.g., “AI-drafted PRs for Billing service”)
- Output type: (a) production code, (b) customer-facing text, (c) finance/reporting, (d) legal/policy, (e) internal decision memo, (f) ops/runbook
- Frequency/week:
- Blast radius if wrong: Low / Medium / High / Regulated
- Current baseline metric: cycle time, CSAT, rollback rate, escalation rate, etc.

2) ASSIGN VERIFICATION LEVEL (L0–L4)
L0 Draft-only: no shipping; brainstorming only.
L1 Human spot-check: internal docs, low-risk comms.
L2 Test + review: production code, runbooks; requires CI + codeowner.
L3 Policy + audit trail: customer comms, finance; requires source links + policy checks.
L4 Regulated approval: legal terms, PHI/PII; requires legal/compliance sign-off + retention.

3) DECISION RIGHTS (ownership follows blast radius)
For each workflow, fill in:
- Approver of record (name + role)
- Backup approver
- “Stop-ship” authority (who can block release/send)
- Escalation path (who is paged or notified)

4) QUALITY GATES (choose 3–5 per high-risk workflow)
Examples:
- Source-link requirement: every claim must cite an internal doc, ticket, or dashboard.
- CI gate: tests + static analysis must pass before merge.
- Sensitive-topic classifier: pricing/refunds/SLA/security language triggers approval.
- Tool allowlist: agent may only call approved APIs/tools.
- Change window: model or prompt updates only during scheduled windows.
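Two of the gates listed under QUALITY GATES (the tool allowlist and the sensitive-topic trigger) are easy to express in code. A minimal Python sketch follows; every name in it (ALLOWED_TOOLS, SENSITIVE_TERMS, check_gates) is illustrative, not a prescribed interface, and a real deployment would use a proper classifier rather than keyword matching.

```python
# Minimal sketch of two quality gates: a tool allowlist and a
# sensitive-topic trigger. All names here are hypothetical examples.

ALLOWED_TOOLS = {"jira.read", "github.read", "github.open_pr"}
SENSITIVE_TERMS = ("pricing", "refund", "sla", "security")

def check_gates(tool_calls: list[str], draft_text: str) -> list[str]:
    """Return a list of gate violations; an empty list means the draft may proceed."""
    violations = []
    # Tool allowlist: the agent may only call approved APIs/tools.
    for tool in tool_calls:
        if tool not in ALLOWED_TOOLS:
            violations.append(f"tool not on allowlist: {tool}")
    # Sensitive-topic trigger: flagged language routes the draft to an approver.
    lowered = draft_text.lower()
    hits = [term for term in SENSITIVE_TERMS if term in lowered]
    if hits:
        violations.append("sensitive topics require approval: " + ", ".join(hits))
    return violations
```

For example, a draft that calls an unapproved payments tool and mentions refund terms would come back with two violations, each of which can block release under the workflow's stop-ship authority.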
5) AUDIT & LOGGING (minimum for L2–L4)
Log fields:
- timestamp, agent name/version, requester, workflow name
- inputs (ticket ID, repo, doc IDs), tool calls, retrieved sources
- output hash or stored output, verification level, approver identity
Retention guideline:
- L2: 30–90 days
- L3: 90–365 days
- L4: per compliance/legal requirement

6) METRICS (weekly review)
Pick 4–6 metrics per workflow:
- Speed: cycle time, time-to-first-draft
- Quality: rollback rate, defect rate, policy violations
- Customer impact: CSAT/NPS deltas, complaint rate
- Risk: security findings, escalation rate
Define thresholds that trigger action (e.g., +20% escalations week-over-week).

7) 90-DAY ROLLOUT CHECKLIST
Weeks 1–2: Pick 2 workflows; set baselines; assign approvers; define gates.
Weeks 3–6: Implement logging + gates; run limited pilot; weekly metric review.
Weeks 7–10: Expand to 2–4 more workflows; add golden task suite for regressions.
Weeks 11–13: Run one incident drill; publish “AI decision rights” doc; plan next quarter.

Definition of Done: For any AI-enabled workflow, anyone in the company can answer in under 30 seconds:
(1) Who approves it?
(2) What evidence is required?
(3) Where is the audit trail?
(4) What metric tells us it’s working (or drifting)?
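The minimum log fields from the AUDIT & LOGGING section can be captured in a single structured record. The sketch below is one possible shape, not a prescribed schema; the class and field names (AuditRecord, make_record) are hypothetical, and it stores a SHA-256 hash of the output rather than the output itself, which is one of the two options the section allows.

```python
# Sketch of one audit-log record carrying the minimum fields for L2-L4
# workflows. Field and class names are illustrative, not a required schema.
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    timestamp: str           # UTC ISO-8601
    agent: str               # agent name/version
    requester: str
    workflow: str
    inputs: list[str]        # ticket IDs, repo, doc IDs
    tool_calls: list[str]
    sources: list[str]       # retrieved sources
    output_hash: str         # SHA-256 of the output (or store the output itself)
    verification_level: str  # "L2".."L4"
    approver: str            # approver identity

def make_record(agent: str, requester: str, workflow: str,
                inputs: list[str], tool_calls: list[str], sources: list[str],
                output_text: str, level: str, approver: str) -> dict:
    """Build one audit record as a plain dict, ready for a log pipeline."""
    return asdict(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        agent=agent, requester=requester, workflow=workflow,
        inputs=inputs, tool_calls=tool_calls, sources=sources,
        output_hash=hashlib.sha256(output_text.encode()).hexdigest(),
        verification_level=level, approver=approver,
    ))
```

Emitting the record as a flat dict keeps it easy to ship to whatever log store the team already runs, and the retention windows above (30–90 days for L2, and so on) can then be enforced by that store's lifecycle policy.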