Working draft for Thu 2026-04-30 demo. Multiple options where useful.
Two framings — pick one or splice. Option A is presentation-friendly (dense, tight). Option B is narrative (better for the demo's opening 60 seconds).
Problem: TC's AI Enrichment classifies, but doesn't show its work. RevOps can't explain classifications to legal; legal has nothing to hand to compliance; deals stall in RFPs that explicitly require source attribution.
Why it matters: ~$1.4M ARR explicitly tied to this gap (Signals 1, 3, 4, 5 + RFP exposure in Signal 8). At TC's stage (~$45M ARR estimate), that's a meaningful percentage of book concentrated in three accounts.
Opportunity: Capture provenance per classification (sources, model, confidence, reasoning) and surface it through three consumer-friendly views: inline indicator on Account record, confidence-prioritized review queue, exportable audit artifact. Cross-reference with D&B where available — captured once, shown three ways.
Why now: D&B has shown up in two RFPs in the last 60 days. Apex Data is offering a free SMB tier. The transparency gap is becoming a procurement filter.
Across the eight signals, TC's customers — and prospects — are asking the same question in different forms: "Can I trust this classification?" Today's product gives them no way to answer.
A $380K Financial Services customer can't satisfy their legal team. A $290K customer is drowning in manual review. A $120K customer churned to D&B explicitly citing transparency. Two RFPs in the last 60 days now require source attribution as a procurement filter. The pattern is consistent: the AI's accuracy isn't the constraint — the inability to show its work is.
The opportunity is to capture provenance per classification (sources, model, confidence, reasoning, optional D&B agreement) once, and use that captured record to power three views: inline trust indicators on the Account record, a confidence-prioritized review queue, and an exportable audit artifact. The capture-layer engineering is the heavy work; the views make trust feel real to RevOps and exportable to legal.
This is a primitive, not a feature. It compounds with use, scales across enrichment types, and gives RevOps something to hand to legal so they can step out of the conversation.
Staging matches both the time constraint the EM flagged verbally and the natural decoupling between capture work, integration work, and the learning loop.
Stage 1: Capture per-classification metadata (model, settings, prompt version, inputs, outputs, sources, run state). Expose on existing OpenAI/Azure flows. History Logs redesign (timestamps, fields touched, sample outputs, integration status — per Andrea's spec). Default presets ("Fast / Balanced / Comprehensive"). Inline confidence on Account record. Solves Signals 1, 2 (partial), 6, 7 (verbatims), 8 (table-stakes).
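A minimal sketch of how the three presets could map onto the captured run settings; the specific values are illustrative assumptions, and only the field names (reasoning_effort, verbosity, web_search_flag) come from the capture spec in the ticket below.

```typescript
// Illustrative preset mapping. Values are assumptions, not shipped defaults;
// field names mirror the per-run metadata the capture layer records.
type EnrichmentPreset = {
  reasoning_effort: "low" | "medium" | "high";
  verbosity: "low" | "medium" | "high";
  web_search_flag: boolean;
};

const PRESETS: Record<"Fast" | "Balanced" | "Comprehensive", EnrichmentPreset> = {
  Fast:          { reasoning_effort: "low",    verbosity: "low",    web_search_flag: false },
  Balanced:      { reasoning_effort: "medium", verbosity: "medium", web_search_flag: true  },
  Comprehensive: { reasoning_effort: "high",   verbosity: "high",   web_search_flag: true  },
};
```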
Stage 2: Pair LLM output with the D&B classification where the account has a D-U-N-S match. Three states: agreement (high confidence), disagreement (review-queue priority), no-D&B (single-source flag). The 4-digit SIC prefix surfaces for legal; the 8-digit detail for RevOps. Same captured record, richer view. Solves Signals 3, 5, 8 (full).
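A sketch of the three-state logic, assuming an upstream D-U-N-S match step has already resolved the account's D&B SIC code (or left it absent). The function name and signature are illustrative, and whether agreement is judged at 4 or 8 digits is an open design question; the sketch uses the 4-digit prefix.

```typescript
// Three-state D&B cross-reference over the same captured record.
type CrossRefState = "agreement" | "disagreement" | "no_dnb";

function crossReference(llmSic: string, dnbSic?: string): CrossRefState {
  if (dnbSic === undefined) return "no_dnb"; // single-source flag
  // Compare at the 4-digit SIC prefix (the legal-facing granularity);
  // the 8-digit detail stays on the record for RevOps drill-down.
  return llmSic.slice(0, 4) === dnbSic.slice(0, 4)
    ? "agreement"      // boosts confidence
    : "disagreement";  // prioritized into the review queue
}
```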
Stage 3: Customer corrections from the review queue feed RAG context for the next run. Per-tenant (the loop is a customer-owned proprietary asset). Lifecycle includes time-decay weighting, event-based invalidation, and a pruning workflow. The audit log preserves full history regardless. Solves Signal 4 (the underlying problem, not the asked-for feature).
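A minimal sketch of the time-decay piece of that lifecycle. The exponential shape and half-life constant are assumptions for illustration; event-based invalidation and pruning would run outside this function.

```typescript
// Time-decay weighting for correction-corpus retrieval (per tenant).
interface Correction {
  recordId: string;
  correctedValue: string;
  correctedAt: Date;    // when the reviewer fixed the classification
  invalidated: boolean; // set by event-based invalidation (hypothetical field)
}

const HALF_LIFE_DAYS = 180; // assumption; would be tuned per tenant

function retrievalWeight(c: Correction, now: Date): number {
  if (c.invalidated) return 0; // invalidated corrections never feed RAG context
  const ageDays = (now.getTime() - c.correctedAt.getTime()) / 86_400_000;
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS); // halves every HALF_LIFE_DAYS
}
```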
Three stories covering the three personas the recommendation serves. Stage 1 stories come first; the Stage 3 trajectory is shown to demonstrate the architecture's reach.
As a RevOps reviewer
I want to see a confidence indicator on each classified Account record
So that I can spend my review time on the records that warrant scrutiny — and trust the rest without checking
As a legal or compliance reviewer (or RevOps responding on their behalf)
I want a date-filterable export of classification decisions with full provenance
So that I can satisfy audit requests and RFP transparency requirements without involving RevOps each time
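One plausible shape for that export, sketched below; the row fields, CSV columns, and function signature are assumptions, not the ticket's spec.

```typescript
// Date-filterable audit export over captured classification log rows.
interface LogRow {
  timestamp: string;       // ISO 8601 run time
  model: string;
  classification: string;  // value written to the Salesforce custom field
  sources_cited: string[];
}

function exportAudit(rows: LogRow[], from: Date, to: Date): string {
  const header = "timestamp,model,classification,sources";
  const body = rows
    .filter(r => {
      const t = new Date(r.timestamp).getTime();
      return t >= from.getTime() && t <= to.getTime(); // the date filter
    })
    .map(r =>
      [r.timestamp, r.model, r.classification, r.sources_cited.join(";")].join(",")
    )
    .join("\n");
  return `${header}\n${body}`;
}
```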
As a RevOps lead managing a team of reviewers
I want low-confidence and conflict records auto-prioritized into a review queue
So that my team's manual review time goes only to records where it actually changes the outcome
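For the queue story, a sketch of one plausible ordering: D&B conflicts first, then ascending model confidence. Field names and the tie-break rule are assumptions; the intuition is that a disagreement carries independent evidence of a problem, so it outranks low standalone confidence.

```typescript
// Review-queue prioritization: conflicts jump the queue, then lowest
// confidence first, so reviewer time goes where it changes the outcome.
interface QueueItem {
  accountId: string;
  confidence: number; // 0..1 from the enrichment run
  crossRef: "agreement" | "disagreement" | "no_dnb";
}

function prioritize(items: QueueItem[]): QueueItem[] {
  return [...items].sort((a, b) => {
    const conflictA = a.crossRef === "disagreement" ? 0 : 1;
    const conflictB = b.crossRef === "disagreement" ? 0 : 1;
    if (conflictA !== conflictB) return conflictA - conflictB;
    return a.confidence - b.confidence;
  });
}
```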
| Field | Value |
|---|---|
| Title | Capture and expose classification provenance for AI Enrichment |
| Type | Epic |
| Priority | P0 |
| Quarter | Q3 — Stage 1 (3–4 weeks); Stage 2 follows in same window |
| Reporter | Andrea Antal (PM candidate) |
| Stakeholders | Bryan Licas (CPO), Ernesto Valdes (CTO), Scott Wilton (Director of Product Design); customer reps from Signal 1, 4, 6 cohorts for validation |
Today's AI Enrichment writes a classification value to Salesforce custom fields. It does not capture or expose the provenance behind that value. Enterprise customers report this gap as a compliance blocker (Signal 1 — $380K FS), revenue risk (Signal 5 — $210K stalled), and churn driver (Signal 3 — $120K to D&B). Two RFPs in the last 60 days (Signal 8) require source attribution as a procurement filter.
Capture per run: timestamp, model, provider, reasoning_effort, verbosity, web_search_flag, prompt_version, full input payload, full output payload, sources cited, run_state. Stored in a new custom object (AI_Classification_Log__c); one row per enrichment run per record. Tracked in separate tickets: the Stage 2 D&B cross-reference (TC-XXX) and the Stage 3 learning loop (TC-XXX, with its own design discovery for forgetting / pruning / multi-tenancy / GDPR).
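As a shape check, a hypothetical TypeScript rendering of one log row; field names follow the capture list above, while the types and enum values are assumptions rather than the Salesforce field definitions.

```typescript
// One row per enrichment run per record (AI_Classification_Log__c).
interface AIClassificationLog {
  timestamp: string;            // ISO 8601 run time
  model: string;
  provider: "openai" | "azure"; // assumed enum; matches the exposed flows
  reasoning_effort: string;
  verbosity: string;
  web_search_flag: boolean;
  prompt_version: string;
  input_payload: unknown;       // full input payload
  output_payload: unknown;      // full output payload
  sources_cited: string[];
  run_state: string;            // e.g. succeeded / failed (assumed states)
}
```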
Success metrics fall into three tiers: technical (Stage 1 must-pass), product-outcome (Stage 1 + 2 in market), commercial (Stage 3 / multi-quarter).

Technical (Stage 1 must-pass):

| Metric | Target | Why it matters |
|---|---|---|
| % of classifications with full provenance captured | 100% | Capture quality — non-negotiable |
| Median Account record drill-down render time | <1s | RevOps daily-use quality |
| Export generation time (10K records, 1yr) | <30s | Legal hand-off latency |
Product-outcome (Stage 1 + 2 in market):

| Metric | Target | Why it matters |
|---|---|---|
| % of customers exporting audit artifact in first 30 days | >40% | Legal/compliance pull validating value |
| Reduction in "data quality vs. setup error" CS tickets | −25% vs. baseline quarter | Signals 2 + 6 outcome |
| Time to RFP response with source attribution | <2 weeks | Sales cycle leading indicator (Signal 8) |
| Stage 2 readiness: D&B agreement rate on overlapping records | Reportable; gate at ≥70% before Stage 2 ship | Calibration — surfaces whether trust-transfer mechanic is sound before betting on it |
Commercial (Stage 3 / multi-quarter):

| Metric | Target | Why it matters |
|---|---|---|
| Correction corpus growth rate | Establish baseline Q1; trend over time | Stage 3 readiness signal — proves the loop has fuel |
| % of new classifications hitting corpus precedent | Track Q2+ | Stage 3 effect — proves the loop is paying back |
| NRR uplift on cohort with 12+ months of corpus | +5pts vs new-cohort baseline (12-mo lag) | The flywheel proof point — measurable but slow |
Validation plan: cheap-to-falsify steps before code; gated readiness checks between stages; an honest qualitative + quantitative readout post-launch.
One such gate: Stage 2 work starts only once the AI_Classification_Log__c schema is stable and there are no managed-package collisions.