
Demo Deliverables — Draft

Working draft for Thu 2026-04-30 demo. Multiple options where useful.

Brief required: opportunity statement · 2–3 user stories · Jira-style ticket · metrics · UI mockup/prototype · validation plan.

UI mockup status: Andrea's annotated whiteboard screenshots already serve as the "before" — pair with 1–2 polished "after" views for History Logs redesign + inline confidence indicator on Account record. Don't redraw in Figma.

Deliverable 1

Opportunity Statement

Two framings — pick one or splice. Option A is presentation-friendly (dense, tight). Option B is narrative (better for the demo's opening 60 seconds).

Option A · Tight / mechanical

Problem: TC's AI Enrichment classifies, but doesn't show its work. RevOps can't explain classifications to legal; legal has nothing to hand to compliance; deals stall in RFPs that explicitly require source attribution.

Why it matters: ~$1.4M ARR explicitly tied to this gap (Signals 1, 3, 4, 5 + RFP exposure in Signal 8). At TC's stage (~$45M ARR estimate), that's a meaningful percentage of book concentrated in three accounts.

Opportunity: Capture provenance per classification (sources, model, confidence, reasoning) and surface it through three consumer-friendly views: inline indicator on Account record, confidence-prioritized review queue, exportable audit artifact. Cross-reference with D&B where available — captured once, shown three ways.

Why now: D&B has appeared in 2 RFPs in the last 60 days. Apex Data is offering a free SMB tier. The transparency gap is becoming a procurement filter.

Option B · Narrative / customer-voice-led

Across the eight signals, TC's customers — and prospects — are asking the same question in different forms: "Can I trust this classification?" Today's product gives them no way to answer.

A $380K Financial Services customer can't satisfy their legal team. A $290K customer is drowning in manual review. A $120K customer churned to D&B explicitly citing transparency. Two RFPs in the last 60 days now require source attribution as a procurement filter. The pattern is consistent: the AI's accuracy isn't the constraint — the inability to show its work is.

The opportunity is to capture provenance per classification (sources, model, confidence, reasoning, optional D&B agreement) once, and use that captured record to power three views: inline trust indicators on the Account record, a confidence-prioritized review queue, and an exportable audit artifact. The capture-layer engineering is the heavy work; the views make trust feel real to RevOps and exportable to legal.

This is a primitive, not a feature. It compounds with use, scales across enrichment types, and gives RevOps something to hand to legal so they can step out of the conversation.

Picking between them: Option B leads with customer pain and lands warmer with Bryan; Option A is faster and lands sharper with Ernesto. If pitching to all three panelists at once, lead with B's first paragraph, then pivot to A's structure for "why now / opportunity / scope."

Recommendation Spine

Three Stages, Six to Eight Weeks for Stages 1+2

Staging matches both the EM's verbal time constraint and the natural decoupling between capture work, integration work, and the learning loop.

Stage 1 · Capture & clarity · 3–4 weeks

Capture per-classification metadata (model, settings, prompt version, inputs, outputs, sources, run state). Expose on existing OpenAI/Azure flows. History Logs redesign (timestamps, fields touched, sample outputs, integration status — per Andrea's spec). Default presets ("Fast / Balanced / Comprehensive"). Inline confidence on Account record. Solves Signals 1, 2 (partial), 6, 7 (verbatims), 8 (table-stakes).
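
To make the presets concrete for the demo, a minimal sketch of how "Fast / Balanced / Comprehensive" might collapse the raw dimensions. The preset names come from the spec above; the parameter values are placeholder assumptions.

```python
# Hypothetical preset mapping: collapses the raw Reasoning Effort x
# Verbosity x Web Search dimensions into three named presets.
# Preset names are from the draft; the specific values are assumptions.
PRESETS = {
    "fast":          {"reasoning_effort": "low",    "verbosity": "low",    "web_search": False},
    "balanced":      {"reasoning_effort": "medium", "verbosity": "medium", "web_search": True},
    "comprehensive": {"reasoning_effort": "high",   "verbosity": "high",   "web_search": True},
}

def resolve_settings(preset: str, overrides: dict | None = None) -> dict:
    """Resolve a preset to raw settings; power users can override any dimension."""
    settings = dict(PRESETS[preset])
    settings.update(overrides or {})
    return settings
```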

Stage 2 · D&B synthesis · 3–4 weeks

Pair LLM output with D&B classification where account has D-U-N-S match. Three states: agreement (high confidence), disagreement (review queue priority), no-D&B (single-source flag). 4-digit SIC prefix surfaces for legal; 8-digit detail for RevOps. Same captured record, richer view. Solves Signals 3, 5, 8 (full).
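
A minimal sketch of the three-state synthesis, assuming a straight 4-digit SIC prefix comparison; the function shape and return fields are illustrative, not a spec.

```python
def synthesize(llm_sic: str, dnb_sic: str | None) -> dict:
    """Pair the LLM classification with D&B where a D-U-N-S match exists.

    Comparison is on the 4-digit SIC prefix (the legal-facing granularity);
    the full 8-digit codes stay on the record for RevOps drill-down.
    """
    if dnb_sic is None:
        # No D-U-N-S match: flag as single-source rather than hiding the gap.
        return {"state": "no_dnb", "flag": "single_source"}
    if llm_sic[:4] == dnb_sic[:4]:
        # Agreement: high confidence, no review needed.
        return {"state": "agreement", "confidence_boost": True}
    # Disagreement: bump to the top of the review queue.
    return {"state": "disagreement", "queue_priority": "high"}
```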

Stage 3 / H2 · Per-tenant memory loop · Bigger — defer

Customer corrections from review queue feed RAG context for next run. Per-tenant (the loop is a customer-owned proprietary asset). Lifecycle includes time-decay weighting, event-based invalidation, and pruning workflow. Audit log preserves full history regardless. Solves Signal 4 (the underlying problem, not the asked-for feature).
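
A sketch of what the lifecycle mechanics could look like; the 180-day half-life and the pruning floor are placeholder assumptions, not decided values.

```python
from datetime import datetime, timezone

HALF_LIFE_DAYS = 180   # placeholder assumption
PRUNE_FLOOR = 0.05     # corrections below this weight are pruned from RAG
                       # context; the audit log keeps full history regardless

def retrieval_weight(corrected_at: datetime, invalidated: bool) -> float:
    """Weight of a customer correction when building RAG context for the next run.

    corrected_at must be timezone-aware. Event-based invalidation (e.g. the
    account re-segmented or acquired) zeroes the weight outright.
    """
    if invalidated:
        return 0.0
    age_days = (datetime.now(timezone.utc) - corrected_at).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)
```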

What's deliberately deferred: Coverage / accuracy gaps (Signal 2's "blank fields, software→Manufacturing" — separate problem space). Tier C inferential enrichment (buying intent, ICP scoring — no D&B equivalent, different confidence framework). Apex Data SMB-tier competition (different segment, different sale).

Deliverable 2

User Stories

Three stories covering the three personas the recommendation serves. Stage 1 stories come first; the Stage 3 trajectory is shown to demonstrate the architecture's reach.

Story 1 · RevOps reviewer — daily classification trust

As a RevOps reviewer
I want to see a confidence indicator on each classified Account record
So that I can spend my review time on the records that warrant scrutiny — and trust the rest without checking

Acceptance criteria

  1. Confidence indicator renders on the Account record page for every classified record.
  2. Drill-down shows model, settings, sources, and reasoning on demand; minimal view by default.
  3. Median drill-down render time is under 1 second (technical metric, Deliverable 4).
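
Sketch of the bucketing behind the indicator; the cut points are placeholder assumptions to be tuned against Stage 2 D&B agreement data.

```python
def indicator(confidence: float, dnb_state: str | None = None) -> str:
    """Map a classification to an indicator bucket for the Account record view."""
    if dnb_state == "disagreement":
        return "review"    # a D&B conflict trumps raw confidence (Stage 2)
    if confidence >= 0.85:
        return "high"
    return "medium" if confidence >= 0.60 else "low"
```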

Story 2 · Legal / compliance — defensible export

As a legal or compliance reviewer (or RevOps responding on their behalf)
I want a date-filterable export of classification decisions with full provenance
So that I can satisfy audit requests and RFP transparency requirements without involving RevOps each time

Acceptance criteria

  1. Export available as JSON or HTML, filterable by date range, confidence, and account.
  2. Each exported row carries full provenance: model, settings, prompt version, sources, reasoning, run state.
  3. A 10K-record, one-year export generates in under 30 seconds (technical metric, Deliverable 4).

Story 3 · RevOps lead — review queue at scale

As a RevOps lead managing a team of reviewers
I want low-confidence and conflict records auto-prioritized into a review queue
So that my team's manual review time goes only to records where it actually changes the outcome

Acceptance criteria

  1. Records below the confidence threshold enter the review queue automatically.
  2. Conflict records (D&B disagreement, Stage 2) are prioritized ahead of low-confidence-only records.
  3. Records at or above the threshold can be bulk-approved in a single action.

Why this picks "the underlying problem" over the asked-for Signal 4 feature: the customer named "bulk override with confidence filtering." The actual problem is "manual review is killing turnaround time." The story above answers the problem with confidence-driven auto-queueing AND bulk approval at threshold — which is the right shape, not the literal feature request.
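
To make that shape concrete, a minimal triage sketch; the 0.85 threshold is a placeholder and would be tenant-configurable.

```python
APPROVE_THRESHOLD = 0.85   # placeholder; tenant-configurable in practice

def triage(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into a bulk-approvable batch and a prioritized review queue.

    D&B disagreements (Stage 2) jump ahead of records that are merely
    low-confidence; within each group, lowest confidence reviews first.
    """
    auto_ok = [r for r in records
               if r["confidence"] >= APPROVE_THRESHOLD
               and r.get("dnb_state") != "disagreement"]
    queue = [r for r in records if r not in auto_ok]
    queue.sort(key=lambda r: (r.get("dnb_state") != "disagreement",
                              r["confidence"]))
    return auto_ok, queue
```
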
Deliverable 3

Jira-Style Ticket (Stage 1 Epic)

Title: Capture and expose classification provenance for AI Enrichment
Type: Epic
Priority: P0
Quarter: Q3 — Stage 1 (3–4 weeks); Stage 2 follows in the same window
Reporter: Andrea Antal (PM candidate)
Stakeholders: Bryan Licas (CPO), Ernesto Valdes (CTO), Scott Wilton (Director of Product Design); customer reps from Signal 1, 4, 6 cohorts for validation

Description

Today's AI Enrichment writes a classification value to Salesforce custom fields. It does not capture or expose the provenance behind that value. Enterprise customers report this gap as a compliance blocker (Signal 1 — $380K FS), revenue risk (Signal 5 — $210K stalled), and churn driver (Signal 3 — $120K to D&B). Two RFPs in the last 60 days (Signal 8) require source attribution as a procurement filter.

Stage 1 in-scope work

  1. Capture per-classification metadata at the OpenAI/Azure call boundary: timestamp, model, provider, reasoning_effort, verbosity, web_search_flag, prompt_version, full input payload, full output payload, sources cited, run_state.
  2. Persist metadata in a Salesforce-queryable store (custom object: AI_Classification_Log__c). One row per enrichment run per record; schema sketch after this list.
  3. Surface inline confidence indicator on Account record page layout via Lightning Web Component.
  4. Surface drill-down panel showing model, settings, sources, reasoning. Designed per "subtract not add" — minimal default view, depth on demand.
  5. Provide JSON / HTML export of classification log filtered by date, confidence, account.
  6. Replace History Logs flow-viz dominance with timestamps, fields touched, sample outputs, integration status (per Andrea's whiteboard observation).
  7. Default model presets ("Fast / Balanced / Comprehensive") — collapse the raw Reasoning Effort × Verbosity × Web Search dimensions for non-power users.
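
A sketch of the captured record and export covering items 1, 2, and 5; the field names mirror item 1, while the Python shape and export signature are illustrative.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ClassificationLog:
    """One row per enrichment run per record; mirrors AI_Classification_Log__c."""
    record_id: str
    timestamp: str          # ISO 8601
    model: str
    provider: str           # "openai" | "azure"
    reasoning_effort: str
    verbosity: str
    web_search_flag: bool
    prompt_version: str
    input_payload: dict     # full input payload (item 1)
    output_payload: dict    # full output payload (item 1)
    sources: list[str]      # sources cited
    run_state: str          # e.g. "succeeded" | "failed"
    confidence: float

def export_json(logs: list[ClassificationLog], since: str, until: str,
                min_confidence: float = 0.0) -> str:
    """Item 5: date- and confidence-filtered JSON export (HTML analogous)."""
    rows = [asdict(log) for log in logs
            if since <= log.timestamp <= until
            and log.confidence >= min_confidence]
    return json.dumps(rows, indent=2)
```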

Out of scope (tracked separately)

  1. Coverage / accuracy gaps (Signal 2's blank fields and software→Manufacturing misclassifications).
  2. Tier C inferential enrichment (buying intent, ICP scoring).
  3. Apex Data SMB-tier competition (different segment, different sale).
  4. Stage 3 per-tenant memory loop (deferred to H2).

Dependencies

  1. Existing OpenAI/Azure enrichment flows (capture hook at the call boundary).
  2. Salesforce custom-object storage for AI_Classification_Log__c (cost projection in the Week 3 Engineering review).
  3. Andrea's History Logs whiteboard spec (item 6).

Success criteria (technical)

  1. 100% of classifications captured with full provenance.
  2. Median Account record drill-down render time under 1s.
  3. Export of 10K records / 1 year in under 30s.

Risks

  1. Storage cost of full input/output payloads at 10× current run volume.
  2. AI_Classification_Log__c schema instability or managed-package collisions.

Deliverable 4

Metrics

Three tiers: technical (Stage 1 must-pass), product-outcome (Stage 1 + 2 in market), commercial (Stage 3 / multi-quarter).

Technical (Stage 1 must-pass)

Metric · Target · Why it matters
% of classifications with full provenance captured · 100% · Capture quality — non-negotiable
Median Account record drill-down render time · <1s · RevOps daily-use quality
Export generation time (10K records, 1yr) · <30s · Legal hand-off latency

Product outcome (30 days post-Stage 1 ship)

Metric · Target · Why it matters
% of customers exporting audit artifact in first 30 days · >40% · Legal/compliance pull validating value
Reduction in "data quality vs. setup error" CS tickets · −25% vs. baseline quarter · Signals 2 + 6 outcome
Time to RFP response with source attribution · <2 weeks · Sales-cycle leading indicator (Signal 8)
Stage 2 readiness: D&B agreement rate on overlapping records · Reportable; gate at ≥70% before Stage 2 ship · Calibration — surfaces whether the trust-transfer mechanic is sound before betting on it

Commercial / Stage 3 leading indicators

Metric · Target · Why it matters
Correction corpus growth rate · Establish baseline Q1; trend over time · Stage 3 readiness signal — proves the loop has fuel
% of new classifications hitting corpus precedent · Track Q2+ · Stage 3 effect — proves the loop is paying back
NRR uplift on cohort with 12+ months of corpus · +5 pts vs. new-cohort baseline (12-mo lag) · The flywheel proof point — measurable but slow

What's deliberately not on this list: classification accuracy (separate problem; Tier A enrichment is mostly solved). Sales-named features like "bulk override with confidence filtering" (the metric for that is review-queue time, which IS on the product-outcome list).

Deliverable 5

Validation Plan

Cheap, falsifiable steps before code; gated readiness checks between stages; an honest qualitative + quantitative readout post-launch.

Pre-build (Week 0 — this week)

  1. Written brief to 3 customers. Send a 1-page recommendation summary to RevOps leads at the customers behind Signals 1, 4, and 6. Ask: "If you saw a confidence indicator and one-click drill-down on your records today, would you skip review or escalate? Would the export satisfy your legal team?" Falsifiable in 5 days.
  2. Internal CS interview. Pull the actual Signal 6 ticket data with the agent who handles industry-classification escalations. Confirm that the +40% resolution time and the "predominantly industry classification" claims are sized correctly.

Mid-build (Week 3)

  1. Wireframe / prototype walkthrough. 2 RevOps leaders + 1 legal stakeholder. Capture: does the inline indicator answer the trust question? Does the export satisfy compliance review without RevOps in the loop?
  2. Internal Engineering review. Storage cost projection at 10× current run volume. Confirm AI_Classification_Log__c schema is stable and no managed-package collisions.
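
A back-of-envelope template for that projection; every input below is a placeholder for Engineering's real numbers.

```python
# Storage projection for AI_Classification_Log__c at 10x run volume.
# All inputs are placeholder assumptions for the Week 3 review.
runs_per_day_today = 50_000   # placeholder
avg_row_kb = 8                # full input + output payloads; placeholder
growth_factor = 10
retention_days = 365

gb_per_year = (runs_per_day_today * growth_factor * avg_row_kb
               * retention_days) / 1024**2
print(f"~{gb_per_year:,.0f} GB/year at 10x volume")  # ~1,392 GB with these inputs
```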

Post-launch (Stage 1 ship + 30 days)

  1. Quantitative: hit the metric targets above.
  2. Qualitative: revisit the Signal 1 customer's RevOps lead — "do you have a better answer for legal now?" Direct interview, not survey.
  3. Pipeline: track 2 RFP responses post-launch. Did the D&B-style transparency requirement become non-blocking? If yes, that is the sharpest commercial validation available.

Stage gates

  1. Stage 1 → Stage 2: D&B agreement rate on overlapping records is reportable and clears the ≥70% gate (Deliverable 4).
  2. Stage 2 → Stage 3: correction corpus baseline established and trending up (commercial tier, Deliverable 4).

Falsification — what would tell us we're wrong

  1. The Week 0 brief comes back "we'd still escalate every record" or "the export wouldn't satisfy our legal team."
  2. The Week 3 walkthrough shows legal still needs RevOps in the loop to interpret the export.
  3. D&B agreement rate on overlapping records lands below the 70% gate, meaning the trust-transfer mechanic is unsound.

Notes for Andrea

Things to Decide / Polish Before Demo