
Bryan Follow-Up Prep

Traction Complete · 30-min Zoom · Thu May 7, 2026 @ 2:00 PM PT

Interviewer
Bryan Licas · CPO · Hiring Manager
Karen's Framing
"Dig a bit deeper into the non-technical side of product thinking"
Likely Spine
Expanding on your closing question on bundled AI pricing
Time Budget
30 min · ~3–4 substantive blocks
Read

What This Round Probably Is

Karen's "non-technical" framing is consistent with one specific thread: the closing question you asked at the demo panel about bundling AI into product pricing instead of the BYO-API-key model. The room got "directionally in lock step" excited — which usually means an internal debate that's already live. Bryan's catch-up is likely a chance to pull on that thread with you one-on-one.

Frame: The exercise was disclosed as "closely aligned with real internal work." If your closing question landed because they're actively debating it, the bar in this round shifts from "did she get it right?" to "do we want her in this argument with us?" Show up like a future teammate weighing in, not a candidate trying to impress.
30 Min

Pacing

Block · Time · Notes
Warm-up & demo retro · ~3 min · "How did the demo feel for you?" Have a short, honest reflection ready.
The pricing thread · ~15–18 min · This is the meat if it's where he goes. Diagnostic first, prescriptive on request, defend under probing.
Your questions · ~6–8 min · Pick 2–3. Lean toward inviting the internal debate.
Wrap / logistics · ~2 min · If start-date / Hungary surfaces, see logistics section.
Discipline: If Bryan is testing your thinking on the pricing question, give yourself permission to actually think out loud — not deliver a script. He'll respect "let me work through this" more than a memorized answer.
Centerpiece

AI Pricing & Productization — Your POV

What you actually asked at the panel and the thinking underneath it. Two versions of the answer below: diagnostic (when Bryan wants to test how you reason) and prescriptive (when he wants you to take a position). Use diagnostic first; he will almost certainly push you to commit, at which point pivot to prescriptive.

What you actually asked

"I noticed the AI Enrichment flow currently runs on a BYO-API-key model. Has the team considered moving in a direction where AI is bundled into the product price — so customers don't think of TC as a Salesforce-native shell over an OpenAI bill, but as the AI product itself, with the inference being one component?"

The room said it had been considered — "directionally in lock step." Read: this is a real internal question, not a hypothetical. They want to know how you would resolve it.

Direction A — Diagnostic

"Here are the dimensions before the answer"

Use when: Bryan opens broadly ("what made you ask that?") or you want to demonstrate range before committing. Buys time to read the room and signals you understand this isn't a one-axis decision.

Opener: "Pricing AI products is a multi-axis decision — the right answer depends on segment, sales motion, cost structure, and what you're actually selling. Before I give you a direction, here's how I'd structure the decision space."

Axis · The tradeoff
1. Cost-pass-through vs. value-based · BYO-key is pass-through: customer pays exactly what they consume. Bundled is value-based: charge for outcome, not inference. Pass-through is easier to defend ("we're not marking up"); value-based captures more margin if the value scaffold is real.
2. AI as feature vs. AI as product · If AI is part of how the existing product works, pricing absorbs it. If AI is the thing being sold, pricing reflects it. Feature is invisible (low friction, low premium); product is visible (high trust requirement, high willingness-to-pay if trust is earned).
3. Cost volatility absorption · BYO-key: customer absorbs all volatility (token prices, model availability). Bundled: TC absorbs it. Margin compresses if frontier-model costs spike. Mitigations: contractual price-adjustment clauses, model-agnostic abstraction, longer-term capacity contracts.
4. Segment fit · SMB: cost-sensitive, no AI procurement infra, no existing API contracts → bundled is an onboarding accelerator. Mid-market: mixed → hybrid reduces friction. Enterprise: most have AI procurement processes and existing model contracts → BYO-key respects existing investments.
5. Procurement & sales motion · BYO-key: faster pilot start, but customer asks "what am I paying TC for?" downstream. Bundled: longer pilot conversation (defend the markup) but cleaner ROI story.
6. Moat exposure · BYO-key positions TC as plumbing — customer can swap providers without changing TC. Bundled positions TC as the AI provider in the customer's eyes — switching cost increases. Risk: regulated customers may require "I picked the model," making BYO-key non-negotiable.
Move: End the diagnostic answer with: "That's the decision space. The reason I asked the question is I think you can hold a clear position once you decide which axis you're optimizing for. Want me to give you mine?" — this hands the lead back to Bryan and signals you're not hedging, you're sequencing.
Direction B — Prescriptive

"If I were the PM on Discover, here's where I'd push"

Use when: Bryan asks for your recommendation directly, or after you've laid out the diagnostic and he says "OK, what would you do?" Don't soften the position — he wants to see if you'll commit.

Headline:

"Hybrid pricing, weighted toward bundled, with BYO-key as the enterprise compliance escape hatch. The repositioning that matters most isn't the price model — it's that TC stops selling inference and starts selling the trust scaffold around it."

Recommendation in five moves

Move · What it does
1. SMB + mid-market: bundled by default · Three tiers (Fast / Balanced / Comprehensive) baked into product price. Volume discounts + Salesforce-native UX justify the markup. Cost calculator flips into an SMB acquisition lever — "we tell you what runs cost; Apex doesn't."
2. Enterprise: tiered options · Bundled (default) for the convenience buyers. BYO-key compliance lane for procurement-bound enterprises. White-glove tier with model selection + dedicated capacity for the high-spend FS / public-co segment.
3. Trust scaffold as the differentiated SKU layer · D&B integration + audit trail + memory loop = "AI trust layer" priced separately, ladders into mid and enterprise tiers. This is where margin lives long-term. Inference is a commodity; trust is the procurement gate.
4. Sequencing · Phase 1 (now): keep BYO-key, introduce bundled as opt-in pilot for SMB; cost calculator as the switch incentive. Phase 2 (6–12 mo): bundled becomes default for new SMB/mid customers. Phase 3 (12–18 mo): BYO-key persists only as an enterprise option.
5. Reframe the "why not OpenAI directly" objection · From "we charge a markup" to "we sell a different product." TC isn't selling the inference call — TC is selling the AI + verification scaffold around the call. Trust is the procurement gate for AI products right now, not capability.

Why this direction

What would change my mind

Tone: The "what would change my mind" close is the most important part of the prescriptive answer. Bryan is testing whether you hold positions and remain falsifiable. People who recommend without naming the off-ramp tend to dig in when they're wrong — he'll be reading for the difference.
Building Blocks

Three Framings You Can Reach For

These are the sub-arguments behind the prescriptive recommendation. Each works as a standalone answer if Bryan probes a specific dimension.

Framing 1: Segment tiers

SMB (<100 employees, no D&B sub): Balanced default, D&B off, memory loop off, cost calculator critical. They need to know the bill before they click. Bundled with predictable cost cap.

Mid-market (100–1,000): Balanced default, Comprehensive for compliance accounts, D&B optional as tier upgrade, calculator useful. Hybrid offering — bundled by default, BYO-key on request.

Enterprise (1,000+, FS / public co / IPO-prep): Comprehensive default, D&B required, memory loop active, calculator optional (FinOps elsewhere). Tiered options including white-glove and BYO-key compliance lane.

Why useful: if Bryan asks "but enterprise customers will push back on bundled," this gives you a structured response — tier the offering, don't pick one for everyone.
Framing 2: "What am I paying TC for?" Three answers, in order of strength:

1. Volume discounts. TC's aggregate spend gives them better $/MTok than any single mid-market customer would get direct.
2. Salesforce-native pipeline. The LLM call is small. Dedupe, normalization, classification, write-back, audit log, review queue — that's the product.
3. Captured record + memory loop + D&B integration. Stage 2/3 value compounds over time; calling OpenAI directly never builds it.

Why useful: the strongest defense of bundled pricing. If Bryan probes "what's the moat?" or "what stops customers from going direct?" — this is the answer, ordered.
Framing 3: D&B as the deterministic anchor

End users — including sophisticated RevOps people — operate on a deterministic mental model. "Same input → same output." That model is correct for every other tool they use, and completely wrong for LLMs.

Educating users out of the deterministic model is hard. Pairing with a deterministic source does the work for you. D&B isn't replacing the LLM — it's giving the customer a stable reference point their existing mental model can latch onto. Green if TC and D&B agree, amber if D&B doesn't have the record, red if they disagree. That's not theater; that's a trust mechanism that works without AI literacy.

Why useful: reframes D&B from "extra cost line item" to "the thing customers are actually buying." Pairs with the pricing recommendation: trust scaffold is the SKU layer where margin lives.
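The green / amber / red rule is simple enough to hold as a three-branch function, which is worth having at your fingertips if Bryan asks how the mechanism actually works. A minimal sketch — the field values and case-folding normalization are illustrative assumptions, not TC's actual schema:

```python
def cross_ref_status(tc_value, dnb_value):
    """Traffic-light trust status: AI-classified field vs. the deterministic D&B reference."""
    if dnb_value is None:
        return "amber"  # reference has no record: unconfirmed, not wrong
    if tc_value.strip().lower() == dnb_value.strip().lower():
        return "green"  # the deterministic anchor agrees
    return "red"        # disagreement: flag for human review

status = cross_ref_status("SaaS", "Fintech")  # "red" -> route to review queue
```

The point of the sketch is the shape, not the code: the customer's deterministic mental model only has to trust the comparison, never the LLM itself.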
Probes

Likely Bryan Questions on This Thread

If he's actually pulling on the bundled-pricing thread, here are the probes most likely to surface, with the sharpest version of your answer.

"Walk me through how you'd actually sequence this — what's first?"
Answer: Phased — keep BYO today, introduce bundled as opt-in SMB pilot with the cost calculator as switch incentive, default-flip in 6–12 months for new customers, leave BYO as enterprise compliance lane long-term. Why phased: "you can't rip and replace pricing without breaking trust with customers who built around BYO. Phasing lets you collect actual switch data before betting the company on the new model."
"How do you handle LLM cost volatility in a bundled model?"
Three lines of defense: (1) contractual price-adjustment clauses tied to a published index of frontier-model pricing; (2) model-agnostic abstraction so you can route to whichever provider has the best $/MTok this quarter; (3) margin lives in the trust scaffold (D&B verification, memory loop, audit trail), not inference — so cost spikes hit a smaller share of revenue. The deeper move: "if your margin depends on inference cost staying low, you're a token-arbitrage business. The bundled-pricing direction is what lets you stop being that."
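Line of defense (2) — model-agnostic abstraction — reduces to "route to the cheapest provider above a quality floor." A sketch with invented provider names, rates, and eval scores (not TC's vendors or actual $/MTok):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    usd_per_mtok: float   # blended $ per million tokens (invented)
    quality_score: float  # internal eval score in [0, 1] (invented)

def route(providers, min_quality):
    """Route to the cheapest provider that clears the quality floor."""
    eligible = [p for p in providers if p.quality_score >= min_quality]
    if not eligible:
        raise ValueError("no provider meets the quality floor")
    return min(eligible, key=lambda p: p.usd_per_mtok)

# Hypothetical quarter: prices move, routing follows, the product is untouched.
providers = [
    Provider("frontier-a", 15.0, 0.95),
    Provider("frontier-b", 9.0, 0.90),
    Provider("value-c", 2.5, 0.70),
]
best = route(providers, min_quality=0.85)  # picks frontier-b
```

Lowering the quality floor for low-stakes runs routes them to the cheap tier automatically — which is the margin-protection argument in one line.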
"What about customers who already have OpenAI / Anthropic contracts?"
Answer: "BYO-key remains the enterprise compliance lane for exactly that reason — you don't strand customers with existing AI investments. But that's a smaller share of the segment than people assume; most mid-market and SMB customers don't have AI procurement processes yet. Bundled is an onboarding accelerator for everyone except the BYO-bound enterprise tier."
"What's the moat if AI is just an API call?"
Sharpest answer: "TC isn't selling the call. TC is selling the AI plus the verification scaffold around the call. Trust is the procurement gate for AI products right now — not capability. The moat is captured records, memory loop, D&B as deterministic anchor, and Salesforce-native delivery. Replace TC with raw OpenAI and you lose all of that, regardless of which model is underneath." Pairs with Framing 2 (the three-pack).
"Per-record? Per-seat? Consumption-based? Hybrid?"
Lean hybrid: per-seat base for predictability (SMB and mid-market budgets love this) + per-record consumption above an included threshold + tier upgrades for D&B / memory loop / audit trail. Why: "pure consumption is the OpenAI model and customers conflate you with that. Pure per-seat caps your upside on heavy-usage accounts. Hybrid gives you the predictability story and the upside story without forcing one segment to subsidize the other."
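The hybrid shape is easy to make concrete: a per-seat base with an included allowance, consumption overage above it, and tier add-ons. All numbers and add-on names below are invented for illustration, not proposed price points:

```python
ADDON_PRICES = {"dnb_verify": 200.0, "memory_loop": 150.0, "audit_trail": 100.0}

def monthly_bill(seats, records_enriched, addons=(),
                 seat_price=50.0, included_per_seat=500, overage_per_record=0.10):
    """Per-seat base + consumption above the included allowance + tier add-ons."""
    base = seats * seat_price
    included = seats * included_per_seat          # predictable floor for budgeting
    overage = max(0, records_enriched - included) * overage_per_record
    return base + overage + sum(ADDON_PRICES[a] for a in addons)

# 10 seats, 7,000 enriched records, D&B verification add-on:
# 500 base + (7000 - 5000) * 0.10 overage + 200 add-on = 900.0
bill = monthly_bill(10, 7000, ("dnb_verify",))
```

Note how the structure encodes the argument: the base is the predictability story, the overage is the upside story, and the add-ons are the trust-scaffold SKU layer.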
"Who internally pushes back on this, and on what?"
Don't pretend to know TC's politics. Reframe: "I don't know your internal camps yet, but I'd anticipate three sources of resistance — (1) anyone responsible for gross margin who sees inference cost volatility as an existential risk; (2) anyone close to the enterprise sales motion who's seen procurement-bound customers walk away from bundled; (3) anyone who's invested in the BYO-key implementation who'd own the migration cost. Those are the three conversations I'd want to have first if I joined."
"Is data quality really a painkiller, or still a vitamin?"
Reframe via trust: "Data quality alone is a vitamin — chronic pain, easy to deprioritize. But AI trust is a painkiller. The acute version of the pain isn't 'my CRM has dirty data,' it's 'my reps are spending half their time second-guessing AI outputs and burning customer interactions.' That's the urgency. The bundled trust-scaffold positioning sells against that pain, not the chronic dirty-data one."
"Is this an AI feature or an AI product?"
Answer: "Today it's a feature with API-key admin overhead — AI happens inside Discover but customers think of it as 'the OpenAI part of TC.' The bundled direction is the move from feature to product. That repositioning is what makes the pricing question downstream of the product question. You're not pricing AI; you're pricing the trust scaffold."
"What did you mean by 'directionality' when you asked the question?"
Honest answer: "I was asking whether the team had considered repositioning — not just adjusting price points within the BYO-key model, but stepping back and asking whether TC is currently selling the right thing. The pricing change is the visible artifact; the product-positioning change is what would actually drive it."
Discipline: You won't get all nine. You'll get two or three. Don't try to bank the rest as preamble — answer the question that's in front of you, then stop. Bryan will probe further if he wants to.
Deck Probes

If Bryan Probes the Deck Directly

A gap in the demo: you didn't name metrics or success criteria for any of the three pillars. "How would you know it's working?" is the kind of question PMs are expected to answer reflexively — and it's squarely in the "non-technical product thinking" probe space Karen flagged. Below is the framework to reach for, plus the next-most-likely deck-level probes.

1. Metrics & Measurement

If Bryan opens with "how would you measure success on this?" or "what KPIs would you watch?" — anchor the answer in a single north-star outcome, then ladder back through the three pillars.

The north-star outcome

"Customers stop asking 'can I trust this classification?'" — measured both inbound (support tickets, deal-review escalations, audit asks) and at renewal (churn-risk conversations citing AI distrust). Everything below is a leading indicator that ladders into this lagging signal.

Per-pillar metrics

Pillar 1: Raise the floor (smart defaults)
Leading:
• Preset adoption (Limited / Balanced / Extended) vs. custom config
• Pre-run cost-estimate accuracy (forecast vs. actual ±%)
• Time-to-first-successful-run for new admins
• % of runs hitting / overshooting the soft cap
Lagging:
• Reduction in "surprise bill" support tickets
• Re-configuration frequency
• Admin self-reported confidence (config-flow CSAT)

Pillar 2: Capture, store, surface events
Leading:
• % of records with complete audit trail
• Audit export volume + frequency
• Mean time-to-validate a flagged classification (search → answer)
Lagging:
• Reduction in escalated "why was this classified this way" tickets
• Compliance walkthrough completion time vs. manual reconstruction
• Renewal-cycle audit-readiness signal from Francesca-persona accounts

Pillar 3: Cross-reference
Leading:
• % of records in verified / mismatched / unconfirmed states
• Mismatch resolution rate (flagged → corrected)
• Time from mismatch flag to human review
Lagging:
• Reduction in customer-reported "wrong industry / wrong field" tickets
• Lift in deal velocity for cross-referenced accounts (AE confidence proxy)
• Net retention attributable to Discover, segmented by trust-scaffold usage
Move: Name the leading-vs-lagging distinction explicitly: "Leading metrics tell you the surface change is working — adoption, accuracy, time-to-validate. Lagging metrics tell you the underlying pain is actually receding — fewer trust questions inbound, audits getting faster, retention holding." Naming the distinction is the non-technical PM signal.
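One leading indicator — pre-run cost-estimate accuracy — is just the share of runs whose actual spend landed within a tolerance band of the forecast, so it's worth being able to define it precisely if pressed. A sketch with made-up run data:

```python
def estimate_accuracy(runs, tolerance=0.10):
    """Share of runs whose actual cost landed within ±tolerance of the forecast."""
    within = sum(1 for forecast, actual in runs
                 if forecast > 0 and abs(actual - forecast) / forecast <= tolerance)
    return within / len(runs)

runs = [(100.0, 104.0), (50.0, 49.0), (200.0, 260.0), (80.0, 81.0)]  # (forecast, actual) $
accuracy = estimate_accuracy(runs)  # 3 of 4 within ±10% -> 0.75
```

The tolerance band is a product decision in itself: it defines what counts as a "surprise bill," which is the lagging metric this one leads.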

What you'd watch on day 1

What you'd not over-index on

2. Other Likely Deck Probes

"Why three presets and not two or five?"
Answer: Three is the smallest set that maps cleanly onto the volume × stakes matrix without forcing users to interpret edge cases. Two collapses the matrix into a binary that misrepresents real usage; five+ recreates the choice-paralysis problem you're solving. Falsifiable: if usage data shows 80%+ of customers picking the same preset, the matrix is wrong and presets should collapse.
"What if the customer doesn't have D&B?"
Answer: D&B is one instance of a deterministic anchor — the principle is "pair LLM output with a vetted reference dataset," not "must be D&B specifically." Alternatives by segment: ZoomInfo, Crunchbase, the customer's own master data, Salesforce-native clean rooms. The architectural move is the cross-reference pattern; the data source is configurable.
"How does this hold up at 100K+ records?"
Answer: At that volume "Consider batching" in the matrix becomes a hard requirement. Batch processing + stratified-sample audit (don't audit every record) + asynchronous mismatch surfacing in a review queue rather than blocking the run. The audit-trail principle holds; the UX of how it's surfaced needs to scale.
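The stratified-sample audit in that answer can be sketched directly: fix an audit budget per stratum (per preset, per confidence band) instead of auditing all 100K+ records. Stratum keys, record shapes, and sample sizes here are assumptions for illustration:

```python
import random

def stratified_audit_sample(records, stratum_of, per_stratum=50, seed=7):
    """Audit a fixed random sample from each stratum instead of every record."""
    by_stratum = {}
    for rec in records:
        by_stratum.setdefault(stratum_of(rec), []).append(rec)
    rng = random.Random(seed)  # seeded so the audit set is reproducible
    sample = []
    for recs in by_stratum.values():
        sample.extend(rng.sample(recs, min(per_stratum, len(recs))))
    return sample

# 1,000 hypothetical records across two presets -> 100-record audit set
records = [{"id": i, "preset": "Balanced" if i % 2 else "Comprehensive"}
           for i in range(1000)]
audit_set = stratified_audit_sample(records, lambda r: r["preset"])
```

The audit cost stays flat as record volume grows, which is the scaling argument in miniature.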
"What's the MVP if you had to ship in 4 weeks?"
Answer: Pillar 2 (events log) ships first — it's the lowest-risk, highest-leverage piece. Foundation for the other two, unlocks compliance use cases immediately, doesn't require UX changes to existing flows. Pillars 1 and 3 layer on once the event substrate exists. Why this order: you can't fix what you can't see; without audit trail, smart defaults and cross-ref are harder to debug and improve.
"How would you validate this with customers before building?"
Answer: Two lanes. (1) Existing Discover customers — pull support tickets where "trust" or "accuracy" was mentioned, call those accounts. (2) Compliance officers in target enterprise accounts (Francesca persona) — walk them through the audit-export mock to see if it would actually satisfy a regulator. Fastest signal is showing, not asking.
"What's the biggest risk in this approach?"
Answer: The cross-reference pillar creates a new failure mode — when the reference source is itself wrong, the mismatch flag undermines trust in correctly-classified records. Mitigation: surface confidence on the cross-ref source too, not just the LLM. The trust scaffold has to be honest about its own reliability, or it just shifts the trust deficit one layer down.
Connection: Metrics pair naturally with the bundled-pricing thread: "if trust scaffold is where the moat lives, the metrics that prove it's working are the same numbers that justify the SKU. Preset adoption, time-to-validate, mismatch-resolution rate — those are also the sales-collateral numbers." Use this bridge if Bryan asks how the deck connects to your closing question.
Ask Bryan

Questions to Bring

Reshaped to invite the internal debate. The first three are designed to land if the conversation has been on the bundled-pricing thread; the rest work as backups.

1. What are the camps internally on this question — and where are you personally?
Direct invitation into the actual debate. Asking where Bryan personally lands signals you want to know him as a thinker, not just as a CPO with org-aligned talking points. His answer also tells you whether this role would have a meaningful voice in the decision.
2. What is customer feedback telling you about appetite for bundling? Is anyone asking for it, or is this a "we see the direction; the customers don't yet" call?
Distinguishes two very different strategic positions. If customers are asking, this is reading demand. If they're not, this is leading the market — which is a riskier and more interesting bet. Both are valid; you want to know which one you'd be joining.
3. What would have to be true for you to make this move in the next 12 months?
Forces specificity on the constraints. His answer reveals what's blocking the decision — cost data, customer signal, internal alignment, technical readiness, exec air cover. Whichever he names first is the real bottleneck.
4. How does the team handle internal disagreement — what's the norm when Cynthia, Garren, Scott, and you don't agree?
Probes the working culture broadly. Useful if conversation drifts off the pricing thread and you want to understand how decisions get made. Pairs naturally with question 1 if he opens up about camps.
5. Where do you see Discover sitting on the AI-feature-vs-AI-product spectrum two years out?
Forward-looking strategic question. Lets him paint the long-game vision, which is something product-led CEOs/CPOs usually enjoy. Also reveals whether the team thinks of Discover as an extension of the existing suite or a category bet.
6. From the panel last week — was there anything you wish I'd handled differently, or anything the team is still circling on?
Direct ask for feedback. Risk: he may surface the OpenAI-key thread. If he does, you have a clean response: "That's fair — the lesson I took is solo-grinding through a blocker reads as independence to me, but it reads as opacity to a team. I'd default differently." Use only if you want to invite that conversation.
Sequencing: Question 1 is the strongest opener if he's been on the pricing thread — it converts the interview into a peer conversation. Save 4 (disagreement norms) and 6 (panel feedback) for later when rapport is highest.
Universal

Talking Points That Land Regardless

If the conversation drifts away from pricing, these are the things you'd still want Bryan walking away with. Drop in naturally.

Light version: "I felt good coming out of it. The thing I'd sharpen is being faster to clarify when a question lands ambiguously — Ernesto asked about a threshold and I picked one to answer; I'd rather have asked him to pick the lane up front." Self-aware without re-litigating the OpenAI-key thread.
"When I started on the DX/Website team at Ting, there wasn't a UX research function. So I called customers myself — sourced through support tickets, the longest complainers, recent signups, anyone with an interesting question. That habit stuck. Even after the research team built up, I never stopped picking up the phone." Echoes Bryan's own "just pick up the phone" framing from Apr 9.
"My instinct is to over-communicate early — figure out who owns the 'what,' who owns the 'how,' and where the shared seams are. I'd rather spend the first month understanding how they already operate together than show up with a proposal for how things should work." Counters the "Senior PM energy" disruption concern.
"Three things. The panel itself — the texture of disagreement felt like the kind of work I want to be doing. A small product team where one person matters. And the AI direction is real and I want to be hands-on with it, not adjacent to it."
Callbacks

Demo Panel Reference Points

Specific moments from Apr 30 that connect to the pricing thread. Use sparingly — one or two are powerful, more starts to feel like rehearsal.

Moment · How to use it
Worth-vs-cost reframe (Scott) · Scott asked you to position pricing as expensive or cheap; you reframed around worth, not absolute cost. Your bundled-pricing closing question is the same move at the strategic level — reframing what TC is selling, so the price is downstream of the positioning. Pair these explicitly if Bryan probes the connection.
Essence-vs-implementation (Ernesto) · "The directionality and fundamental flavor wouldn't change — more resources buys you 'right from the start' instead of 'ship now, refactor later.'" Same move applies to the pricing question: the recommendation (bundled, trust scaffold as SKU layer) doesn't change with resourcing; sequencing does.
Reputational vs. actual reliability (D&B) · You split a household-name shortcut into two distinct claims. Pairs directly with Framing 3 above — D&B's value isn't its brand, it's its determinism as an anchor for non-AI-fluent users.
Bundled-pricing closing question · The thread you're now expanding on. If Bryan opens with "I wanted to follow up on what you asked at the end…" — that confirms the spine and you go straight to diagnostic.
Threshold-clarification miss (Ernesto) · Honest "what I'd sharpen" answer if Bryan asks. Concrete, small, real — not performative humility.
Backup

If the Conversation Goes Broader

If Bryan doesn't pull on the pricing thread, these are the generic non-technical PM probes most likely to surface. One story per answer.

Tell me about a time you disagreed with a teammate or stakeholder.
Reach for: the worth-vs-cost reframe in the panel (if not used), or self-scheduling at Ting where the org valued phone calls and you made the case for phasing into self-service.
How do you handle pushback from sales / CS / customers wanting feature X right now?
Reach for: Ting customer migration — multiple acquired companies, multiple CRMs, every team had urgent asks. Name your prioritization frame plainly.
How do you build trust with a new team?
Reach for: joining LalaMove at ~50 people, observing before proposing, learning existing patterns first.
Tell me about a customer interaction that changed your thinking.
Reach for: address serviceability fix at Ting — a customer with wrong expectations from the radius hack, which led you to dig in even though it wasn't your job.
How do you communicate with non-technical stakeholders?
Reach for: Tucows AI Council policy work, or BI dashboards at Ting. "I default to plain language and analogies, even when the audience is technical — jargon obscures more than it reveals."
What does success look like for you in the first 90 days?
Reach for: mostly listening. "First 30: meet the team, sit in on customer calls, read every PRD from the last six months, understand seams between products. First 60: form a POV on the product I'd own, pressure-test it. First 90: first non-trivial roadmap recommendation with the team's input."
Logistics

If Start Date / Hungary Comes Up

Early May is when the process was projected to reach offer stage. If Bryan transitions to logistics — start date, availability, "when could you join" — this is the moment.

Read: If logistics come up at all, that's a positive signal — people don't ask scheduling questions of candidates they're rejecting.
Watch

Patterns to Avoid

Pre-Call Checklist

One-Liners