ICMD Agentic Feature Readiness Kit (2026)

Use this kit to turn an “AI feature” into a revenue-grade, auditable workflow.

1) Pick the workflow (one only)
- Name the workflow: ___________________________
- Trigger (what starts it?): _____________________
- Done state (what does “resolved” mean?): _______
- Frequency (weekly volume): ________
- Business owner (Ops/Finance/Support/Sec): ______

2) Define the autonomy ladder (ship in this order)
Tier A — SUGGEST: Agent drafts actions; human executes.
- Required UX: draft output, citations/inputs shown, copy/export.
Tier B — ASSIST: Agent executes tools only after approval.
- Required UX: action preview, approve/deny, partial execution, safe defaults.
Tier C — ACT: Agent auto-executes within policy.
- Required UX: policy rules, kill switch, rollback, post-run report.

3) Policy + permissions template
- Tools allowed (scoped): _______________________
- Data boundaries (what it must never access): ____
- Approval rules (examples):
  - If $ amount > ____ then require approval
  - If confidence < ____ then require approval
  - If record is enterprise/VIP then require approval
- Logging: retention ___ days; PII redaction yes/no; export format ______

4) Evaluation plan (minimum viable)
Offline eval:
- Collect 50–100 real historical tasks.
- Define “correct” as structured fields + validators.
Shadow mode:
- Run in production without execution for ___ days.
- Track: completion rate, top exception reasons, human edits.
Gated rollout:
- Start with Tier A for all users.
- Promote to Tier B for a pilot group (___ accounts).
- Promote to Tier C only when: policy violations < ___ per 1,000 runs AND rollback works.
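The approval rules in the policy template above can be sketched as a single gate that every proposed action passes through before execution. This is a minimal illustration, not a reference implementation; the field names and threshold values are hypothetical placeholders for the blanks in your own template:

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    amount_usd: float   # dollar impact of the action
    confidence: float   # agent's confidence score, 0.0 to 1.0
    is_vip: bool        # enterprise/VIP record flag

# Hypothetical thresholds; substitute the values from your filled-in template.
APPROVAL_AMOUNT_LIMIT = 500.0
MIN_AUTO_CONFIDENCE = 0.90


def requires_approval(action: ProposedAction) -> bool:
    """Return True if a human must approve before the agent executes."""
    if action.amount_usd > APPROVAL_AMOUNT_LIMIT:
        return True
    if action.confidence < MIN_AUTO_CONFIDENCE:
        return True
    if action.is_vip:
        return True
    return False
```

Keeping the rules in one pure function like this makes them easy to unit-test offline against the historical tasks collected in the evaluation plan.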
5) Metrics and targets (CPRO-first)
Define CPRO (Cost per Resolved Outcome):
CPRO = (model cost + tool costs + human review time cost) / successful outcomes
- Baseline human cost per outcome: $____
- Target CPRO: $____ (must be clearly below baseline)
Operational metrics:
- Outcome completion rate target: ____%
- Human touches per outcome target: ____
- Policy violations target: ____ per 1,000
- p95 workflow latency target: ____ seconds

6) Incident and rollback runbook (must exist before Tier C)
- Global pause automation button: owner _______
- Per-tool disable (OAuth scopes / API keys): owner _______
- Rollback method (undo action / bulk revert script): __________
- Customer comms template: where published (status page/email) ______
- Post-incident review checklist:
  - Root cause category: model / data / integration / permissions / policy
  - What guardrail failed?
  - What test would have caught it?
  - What policy change prevents recurrence?

7) Monetization mapping
- What value metric matches the customer’s ROI?
  - Per resolved ticket / per invoice / per lead / per incident
- Pricing sketch:
  - Base fee (governance + integrations): $____/month
  - Usage fee (per resolved outcome): $____
  - Autonomy premium (Tier C): +$____ or +____%
- Monthly value report fields:
  - Outcomes resolved: ____
  - Hours saved: ____
  - Error rate / rollback count: ____
  - Estimated $ impact: ____

If you can fill this out in one sitting and defend the targets to a skeptical CFO or security lead, you’re ready to ship an agentic workflow that lasts.
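The CPRO formula from section 5 is simple enough to compute directly. The helper below is a sketch under the assumption that human review time is priced at a flat hourly rate; the example figures are invented for illustration only:

```python
def cpro(model_cost: float, tool_costs: float,
         review_hours: float, hourly_rate: float,
         successful_outcomes: int) -> float:
    """Cost per Resolved Outcome:
    (model cost + tool costs + human review time cost) / successful outcomes.
    """
    if successful_outcomes <= 0:
        raise ValueError("CPRO is undefined with zero successful outcomes")
    total_cost = model_cost + tool_costs + review_hours * hourly_rate
    return total_cost / successful_outcomes


# Hypothetical month: $400 model spend, $150 tool spend,
# 20 hours of review at $40/hour, 500 resolved outcomes.
# (400 + 150 + 20 * 40) / 500 = $2.70 per resolved outcome.
monthly_cpro = cpro(400.0, 150.0, 20.0, 40.0, 500)
```

Comparing this number against the baseline human cost per outcome gives you the margin argument for the skeptical CFO.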