
2026-04-26 — Demo prep working session

Date: Sun 2026-04-26
Demo: Thu 2026-04-30 @ 2:00 PM PT (Bryan Licas / CPO, Ernesto Valdes / CTO, Scott Wilton / Director of Product Design)
Brief: Traction Complete/Product_Management_2026_Demo.pdf
CSV: Traction Complete/sandbox_test_accounts.csv — 30 well-known companies, no industry set

What this session produced

Note for future-me: overview.md:74 still says "Data already loaded (43k matched leads, 66k processed leads visible on Complete Home dashboard)." That's the stale claim — fix when next in there or re-prompt Claude.

Process note: Claude initially went on a long debugging tangent about whether sandbox data had been wiped, instead of reading the prep doc that already documented the prior state. Lesson: when prompting Claude in this repo, point it at the relevant prep doc up front. The first 4 turns of this session were wasted on speculation.


Feedback / synthesis

1. Solution constraints — reframe from "Salesforce-only" to the real walls

The framing "Salesforce-only" hides the better question. TC's identity is Salesforce-native UX, not "everything must be built inside Salesforce." The product already crosses that line — OpenAI does the classification, web/third-party sources feed the enrichment.

Real binding constraints:

| Constraint | Implication |
|---|---|
| RevOps user must stay in Salesforce | Anywhere the user does work (review, override, audit, config) lives in SF. Don't propose a separate web app. This is the wedge. |
| 6–8 weeks of eng capacity | Hardest binding constraint. Anything that needs new infra, new data partnerships, or a new AI model is out. |
| Don't try to out-data D&B | TC lost deals because of transparency, not because their data was worse. Don't pitch "better classification accuracy" — pitch "explainable + auditable + correctable in your workflow." |
| OpenAI is already the classifier | Change the prompt, capture more metadata in the response, log the reasoning. Don't swap models. |

What this opens up that a pure-Salesforce framing forecloses:
- A "data transparency report" artifact (likely PDF or doc) generated outside SF and surfaced inside it
- Licensing an external taxonomy (SIC/NAICS lookup, even a free one like Census NAICS) and surfacing it as a "second opinion" alongside TC's classification — cheap win
- Capturing sources/confidence from the OpenAI call (prompt → response → metadata) and storing it as Salesforce custom fields — the data is external, the display is native; a minimal sketch of the capture step follows below
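
To make that capture step concrete: a minimal sketch of "change the prompt, capture more metadata in the response," assuming the enrichment flow calls OpenAI directly and can request structured JSON. The model name, prompt wording, and response keys are illustrative guesses, not TC's actual implementation.

```python
# Hypothetical sketch: ask the existing classifier to return its evidence, not just a label.
# None of this is TC's real prompt or schema; the model name and keys are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def classify_account(company_name: str, evidence_snippets: list[str]) -> dict:
    """Classify one account and return the metadata the substrate would store."""
    prompt = (
        "Classify this company's industry.\n"
        f"Company: {company_name}\n"
        f"Evidence: {evidence_snippets}\n"
        "Respond as JSON with keys: industry, confidence (0-1), "
        "sources (which evidence you relied on), reasoning (one sentence)."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```

The point is the shape of the return value: confidence, sources, and reasoning come back in the same call that already produces the label, so capturing them is prompt-and-logging work rather than new infrastructure, which is what the 6–8 week constraint demands.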

2. Transparency needs substrate AND surface, both

Sharpening: transparency that doesn't reduce the user's work is fake transparency. Signal 1's customer says "I honestly don't have a good answer for them, and neither does RevOps." The user isn't legal. The user is RevOps, stuck as human middleware between an opaque AI and a demanding compliance team. D&B's "data transparency report" wins because RevOps can hand it to legal and step out of the conversation. That's the bar.

This reframes Signal 4 too: bulk override + confidence filtering is the same theme — "let me trust the easy 80% and focus on the edge 20%." Both signals are about reducing RevOps cognitive load.

3. Three concrete surfaces on top of one substrate

The recommendation probably has three concrete surfaces:

  1. Inline explainability on the account record (sources, confidence, why this classification)
  2. Confidence-filtered bulk review queue (Signal 4 — review only the edge cases)
  3. Exportable transparency artifact (PDF/report that RevOps can hand to legal/compliance — Signals 3, 5, 8)

All three sit on one substrate: capture-and-store layer for sources/confidence/reasoning from each enrichment run. Substrate is the eng-heavy part; surfaces are what make it feel real.
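
A rough sketch of the substrate's store step, assuming each run's metadata gets written back as Account custom fields over the Salesforce REST API (simple_salesforce is used purely for illustration, and the field API names are hypothetical, not TC's schema):

```python
# Hypothetical sketch: persist per-run enrichment metadata on the Account record so
# all three surfaces (inline panel, review queue, exportable report) read from the
# same stored fields. Field API names and credentials are invented for illustration.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",   # placeholder credentials
    password="password",
    security_token="token",
)


def store_enrichment_result(account_id: str, result: dict) -> None:
    """Write the classification plus its evidence from one enrichment run onto the account."""
    sf.Account.update(account_id, {
        "AI_Industry__c": result["industry"],
        "AI_Industry_Confidence__c": result["confidence"],
        "AI_Industry_Sources__c": "\n".join(result["sources"]),
        "AI_Industry_Reasoning__c": result["reasoning"],
    })
```

Keeping these as plain custom fields rather than a separate object is one way to let standard list views and reports drive the confidence-filtered review queue and the exportable artifact without extra UI plumbing.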

This structure speaks to all three panelists: customer voice (Bryan), data model + audit trail (Ernesto), UX thinking with named user persona (Scott).

4. Signal confidence tiering — what to flag, what to set aside

The brief is practically daring you to do this. "We're more interested in how you think through the data than what you ultimately land on" + Bryan's last line about candidates who say "what they chose, what they didn't, and why."

| Signal | Confidence | Why |
|---|---|---|
| 6 — CS ticket data, +40% resolution time | High direction, fuzzy magnitude | The 40% is solid, but "predominantly industry classification" could be 55% or 90% — big difference for sizing. Worth naming. |
| 8 — D&B in RFPs (last 60 days, n=2) | High | Externally verifiable, recent, specific. Strongest competitive signal. |
| 1 — $380K Financial Services | High on problem, low on urgency | Concrete pain. But locked through 2027. Churn risk is zero; expansion is ~18 months out. Trust the problem, discount the "this is hot" energy. |
| 3 — Churned $120K mid-market | Directional | Post-churn interviews are gold for real reasons — but also where customers rewrite their narrative. Told their team "APAC coverage." "Real reason was compliance" only emerges in post-mortem. n=1; needs Signal 8 to triangulate (it does). |
| 5 — $210K stalled deal | Trust the deal, question the trend claim | Deal stall is real. "This is becoming a standard ask" is the AE's impression — sales has incentive to escalate. Signal 8's RFP data corroborates independently, which gives the trend claim teeth. |
| 2 — $55K SaaS, blank fields, software→Manufacturing | Low — and a different problem | Customer's own diagnosis is uncertain ("not sure if data quality issue or if we set something up wrong"). Pain is real, but it's about coverage/accuracy, not transparency. Mixing it muddies things. |
| 4 — $290K, "bulk override with confidence filtering" | Trust the problem, push back on the feature | Classic solution-disguised-as-need. The actual problem is "manual review is killing turnaround time." Customer named one solution. Don't build what they asked for; build the solution to their problem. |
| 7 — VoC survey, n=15 | Low — and the brief tells you so | n=15, 2 enterprise (and enterprise is exactly who's at risk per every other signal). Question conflated coverage with correctness; 60% headline is meaningless. The verbatim re-read is the only usable piece. Brief is openly setting this up as a "do you spot the bad data?" test. |

Plus the two non-signal inputs:

Three to address most explicitly in the "what I set aside" moment:

  1. Signal 7 (the survey) — calling out the 60% number as misleading and re-anchoring on the verbatims demonstrates exactly the data literacy Ernesto + Scott will both notice
  2. Signal 2 (different problem) — naming that you're scoping out coverage/accuracy because it's a separate problem worth its own H2 work shows judgment, not avoidance
  3. Signal 4's specific feature ask — "the customer asked for X; the underlying problem is Y; here's why I'm building for Y" is textbook PM thinking and Bryan will love it

There's a rhythm in the deck: name the strong signals, name the weak ones, name the ones you're deferring with reasons. That's "what they chose, what they didn't, and why" answered structurally before they have to ask.

5. Working window: Sun → Thu

| Day | Goal |
|---|---|
| Sun Apr 26 (light) | Capture brief into demo-prep.html. Skim the 6 TC help articles linked in the assignment so Mon isn't spent learning the UI. Don't start the sandbox import today — battery for the real days. |
| Mon Apr 27 (full) | (1) Sandbox: import the 30-company CSV, run AI enrichment on 10–20 first, then full 30. Log observations in real time. (2) Re-read the 8 signals. (3) Land core problem + recommendation by EOD. One-pager / outline locked. |
| Tue Apr 28 (full) | Build the 4 deliverables: opportunity statement, 2–3 user stories, Jira-style ticket, metrics, UI mockup/prototype, validation plan. First end-to-end dry run aloud. |
| Wed Apr 29 (full) | Second dry run, time it (target ~20 min for sections 1–4; sandbox walkthrough is its own 15-min block). Refine weak sections. Slide deck if using. Backup screen recording. Prep questions for each panelist. |
| Thu Apr 30 AM | Light review. Sandbox + deck open by 1:30. Demo at 2:00 PM PT. |

Assumptions to flag if wrong:
- OK presenting from a doc/Notion + sandbox tabs vs. needing a full slide deck (brief explicitly says no polished deck needed)
- Mockup tool: v0 / Figma / annotated screenshots — pick what's fastest
- Mon's sandbox import is first thing so AI enrichment results (which take time + inform POV) are in hand before finalizing the recommendation


Session transcript — Claude's actual outputs

Q1: "there's no 7 day plan - today is Sunday so feasibly I'm doing Mon Tue Wed full days while Nathan is at daycare"

Right. Today's Sunday Apr 26, demo is Thu Apr 30 at 2pm PT. So your real working window is:

- Sun (today): light only
- Mon, Tue, Wed: full days while Nathan is at daycare
- Thu: morning review, demo at 2:00 PM PT

That's 3 full days + bookends. Here's a proposed shape — push back where it doesn't fit:

| Day | Goal |
|---|---|
| Sun (light) | Capture the brief into demo-prep.html. Skim the 6 TC help articles linked in the brief (AI Enrichment Flow, Assignment Flows, History Logs, etc.) so Mon isn't spent learning the UI. Don't start the sandbox import today — battery for the real days. |
| Mon | (1) Sandbox: import the 30-company CSV, run AI enrichment on 10–20 first, then full 30. Log observations in real time. (2) Re-read the 8 signals. (3) Land your core problem + recommendation by EOD. One-pager / outline locked. |
| Tue | Build the 4 deliverables: opportunity statement, 2–3 user stories, Jira-style ticket, metrics, UI mockup/prototype, validation plan. First end-to-end dry run aloud. |
| Wed | Second dry run, time it (target ~20 min). Refine weak sections. Slide deck if you're using one. Backup screen recording of sandbox walkthrough in case live fails. Prep questions for each panelist. |
| Thu AM | Light review. Sandbox + deck open by 1:30. Demo at 2:00. |

Things this assumes — flag if wrong:
- You're OK presenting from a doc/Notion + sandbox tabs, vs. needing a full slide deck (the brief explicitly says no polished deck needed)
- Mockup tool: v0 / Figma / annotated screenshots — which is fastest for you?
- Mon's sandbox import is first thing so the AI enrichment results (which take time + inform your POV) are in hand before you finalize the recommendation

Want me to update demo-prep.html now — replace the placeholder with the captured brief, fix the activation note, and replace the 7-day plan with this 3-day version?

Q2: "whats your take on solution bounds/constraints? should it be limited to something that can only be done within salesforce?"

The framing "Salesforce-only" hides the better question. TC's identity is Salesforce-native UX, not "everything must be built inside Salesforce." The product already crosses that line — OpenAI does the classification, web/third-party sources feed the enrichment. Constraining yourself to a Force.com-only solution would be a self-imposed wall the actual product doesn't live behind.

The real constraints I'd anchor to:

| Constraint | Implication |
|---|---|
| RevOps user must stay in Salesforce | Anywhere the user does work (review, override, audit, config) lives in SF. Don't propose a separate web app. This is the wedge. |
| 6–8 weeks of eng capacity | The hardest binding constraint. Forces ruthless scoping. Anything that needs new infra, new data partnerships, or a new AI model is out. |
| Don't try to out-data D&B | TC lost the deals because of transparency, not because their data was worse. Don't pitch "better classification accuracy" — pitch "explainable + auditable + correctable in your workflow." |
| OpenAI is already the classifier | You can change the prompt, capture more metadata in the response, log the reasoning. You don't need to swap models. |

What this opens up that pure-Salesforce framing forecloses:
- A "data transparency report" artifact (likely PDF or doc) that's generated outside SF and surfaced inside it — totally legitimate
- Licensing an external taxonomy (SIC/NAICS lookup, even a free one like Census NAICS) and surfacing it as a "second opinion" alongside TC's classification — cheap win
- Capturing sources/confidence from the OpenAI call (prompt → response → metadata) and storing it as Salesforce custom fields — the data is external, the display is native

The challenge for your recommendation: the signals point overwhelmingly at one theme (transparency/explainability — Signals 1, 3, 5, 6, 8 directly; 7 indirectly). Signal 4 (bulk override + confidence filtering) is the same problem viewed from a different user. Signal 2 (blank fields, wrong classifications) is genuinely a different problem — data quality, not transparency. A clean recommendation probably picks the transparency cluster and explicitly defers the coverage/accuracy problem with a stated reason. Trying to solve both in 6–8 weeks is the failure mode.

So: not "Salesforce-only" — "Salesforce-native UX, 6–8 weeks, build the explainability layer, don't rebuild the data layer."

Q3: "I was also thinking that solution not only has to be transparent but has to feel transparent to users… perception of successful change by users is 80% of the work I would say"

You're right, and that's a strong instinct — but let me sharpen it because the "80%" framing has a failure mode you'll want to dodge with Ernesto on the panel.

The right framing: transparency needs substrate AND surface, and neither alone works.

So the product structure isn't "transparent system + UX polish" — it's two coordinated layers, and the recommendation needs to name both.

The other sharpening I'd make to your point: transparency that doesn't reduce the user's work is fake transparency. Look at Signal 1 again — the customer says "I honestly don't have a good answer for them, and neither does RevOps." The user isn't legal. The user is RevOps, who's stuck being the human middleware between an opaque AI and a demanding compliance team. D&B's "data transparency report" wins not because it's beautiful, but because RevOps can hand it to legal and step out of the conversation. That's the bar.

This reframes Signal 4 (bulk override + confidence filtering) too. It's the same theme: the user wants to not have to review every record. Transparency = "I can trust the easy 80%, focus on the edge 20%." Both signals are about reducing RevOps cognitive load, not just exposing internals.

Practical implication for the recommendation: the deliverable probably has three concrete surfaces — (1) inline explainability on the account record, (2) confidence-filtered bulk review queue, (3) an exportable transparency artifact (PDF or report) that RevOps can hand to legal/compliance. All three sit on top of the same substrate. The substrate is the eng-heavy part; the three surfaces are what make it feel real.

When you pitch this on Thursday: lead with the user (RevOps stuck in the middle), name substrate + surface as two layers, show all three surfaces in the mockup. That covers Bryan (customer voice), Scott (UX thinking), and Ernesto (data model). The "feels transparent" framing wins all three panelists if you don't strip out the substrate.

Q4: "do you think there were any inputs that are low confidence or to take with potential disclaimers?"

Yes — and the brief is practically daring you to do this. "We're more interested in how you think through the data than what you ultimately land on" + Bryan's last line about candidates who say "what they chose, what they didn't, and why." Naming the weak signals explicitly is one of the highest-leverage moves you can make in section 1 (Problem framing + prioritization).

Here's how I'd tier them:

| Signal | Confidence | Why |
|---|---|---|
| 6 — CS ticket data, +40% resolution time | High direction, fuzzy magnitude | The 40% is solid, but "predominantly industry classification" is hand-wavey. Could be 55% or 90% — that's a big difference for sizing. Worth naming. |
| 8 — D&B in RFPs (last 60 days, n=2) | High | Externally verifiable, recent, specific. Strongest competitive signal in the brief. |
| 1 — $380K Financial Services | High on problem, low on urgency | Direct customer quote, concrete pain. But: locked through 2027. Their churn risk is zero; expansion is ~18 months away. Trust the problem, discount the "this is hot" energy. |
| 3 — Churned $120K mid-market | Directional | Post-churn interviews are gold for real reasons — but they're also where customers rewrite their own narrative. They told their team "APAC coverage." The "real reason was compliance" comes out in post-mortem. Could be honest, could be rationalization. n=1; needs Signal 8 to triangulate (it does). |
| 5 — $210K stalled deal | Trust the deal, question the trend claim | The deal stall is real. But "this is becoming a standard ask" is the AE's impression — sales has incentive to escalate. Signal 8's RFP data corroborates it independently, which is what gives the trend claim teeth. Without Signal 8, this would be one AE's hunch. |
| 2 — $55K SaaS, blank fields, software→Manufacturing | Low — and it's a different problem | Customer's own diagnosis is uncertain ("I'm not sure if this is a data quality issue or if we set something up wrong"). The pain is real, but it's about coverage/accuracy, not transparency. Mixing it into the transparency story muddies things. |
| 4 — $290K, "bulk override with confidence filtering" | Trust the problem, push back on the feature | Classic solution-disguised-as-need. The actual problem is "manual review is killing turnaround time." The customer named one solution. Don't build what they asked for; build the solution to their problem. |
| 7 — VoC survey, n=15 | Low — and the brief tells you so | n=15, 2 enterprise (and enterprise is exactly who's at risk per every other signal). Question conflated coverage with correctness, so the 60% headline is meaningless. The verbatim re-read is the only usable piece. The brief is openly setting this up as a "do you spot the bad data?" test. |

Plus the two non-signal inputs:

The three I'd most explicitly address in your "what I set aside" moment:

  1. Signal 7 (the survey) — calling out the 60% number as misleading and re-anchoring on the verbatims demonstrates exactly the data literacy Ernesto + Scott will both notice
  2. Signal 2 (different problem) — naming that you're scoping out coverage/accuracy because it's a separate problem worth its own H2 work shows judgment, not avoidance
  3. Signal 4's specific feature ask — "the customer asked for X; the underlying problem is Y; here's why I'm building for Y" is textbook PM thinking and Bryan will love it

There's a rhythm to this in the deck: name the strong signals, name the weak ones, name the ones you're deferring with reasons. That's the "what they chose, what they didn't, and why" line answered structurally before they have to ask.