E-commerce Diligence: Stop Underwriting Dashboards

Operating Partners diligencing DTC and e-commerce brands ($10M–$150M) in consumables, replenishables, beauty, and consumer health run into the same problem: the numbers look clean in marketing dashboards, but the base case breaks post-close.

The most important diligence question isn’t “Is marketing good?” It’s this:

Are the unit economics real, attributable, and durable enough to underwrite the base case—without heroic assumptions?

If you’ve seen enough deals, you know the pattern:

  • CAC looks fine until you reconcile spend, discounts, returns, and attribution.

  • Retention looks strong until promos cool off.

  • “Profitable paid search” is mostly branded demand.

  • Growth depends on one channel, one creative concept, or one platform rule that can change next quarter.

None of this is a moral failing. It’s how e-commerce measurement behaves under pressure.

What matters in PE is whether diligence prices these failure modes.

Why e-commerce base cases fail

In lower mid-market commerce, dashboards create three predictable illusions. If you don’t actively test for them, you’re not underwriting growth—you’re underwriting a story.

Illusion #1: ROAS equals incrementality

A brand can show strong ROAS while mostly harvesting demand that already exists (branded search), getting over-credited via last-click (email, retargeting), or benefiting from view-through assumptions that don’t survive scrutiny.

If the base case assumes dashboard ROAS translates directly into incremental customers, your downside is larger than your model admits.

Illusion #2: Repeat rate equals loyalty

Consumables naturally create repeat behavior—but repeat does not automatically mean profitable retention. Repeat can be driven by discount ladders, affiliate-heavy “deal shoppers,” or subscription programs that are really churn loops (promo win-backs counted as retention).

If repeat behavior collapses when promos are scaled back, you’re not underwriting loyalty. You’re underwriting a promotional treadmill.

Illusion #3: Growth scales without changing the business

Scaling acquisition changes what you sell, what you discount, and what your unit economics look like. New customers often buy different SKUs, margins compress as targeting broadens, returns rise with certain offers/channels, and fulfillment costs shift as volume and 3PL performance change.

If diligence treats margins as stable and focuses only on “more spend → more customers,” it’s missing the actual underwriting problem.

The stance: Underwrite reality, not attribution

You don’t need more KPIs. You need a structure that connects:

Marketing data → customer economics → revenue durability → EBITDA pathway.

That structure is a KPI tree.

But it only works if you use it like an underwriting tool—meaning it must answer two questions:

  • Is performance real? (measurement integrity and reconciliation)

  • Is performance durable? (cohorts, margin, and channel fragility)

The KPI tree that survives reality

This tree maps cleanly to model levers and forces definitions you can defend in IC.

1) Demand creation efficiency

Start with CAC, but force it into an IC-defensible definition.

What to calculate

  • Blended CAC: (total marketing + sales expense) ÷ new customers

  • Incremental CAC (directional): (incremental spend) ÷ (incremental new customers)

  • CAC by channel: Search (brand vs non-brand), Social, Affiliates/Influencers, Marketplaces

Then convert CAC into what the model actually needs: payback on margin dollars.

A payback definition you can defend

Payback (months) = CAC ÷ monthly contribution margin dollars generated by the acquired cohort

Why contribution margin? Because this is where deals get mispriced. Gross margin ignores fulfillment reality. Revenue ignores discounts and returns. “ROAS payback” ignores everything.
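The definitions above can be sketched in a few lines. The figures below are illustrative, not from any deal; the point is that payback divides CAC by monthly contribution margin dollars, not revenue.

```python
def blended_cac(marketing_and_sales_expense: float, new_customers: int) -> float:
    """Blended CAC: total marketing + sales expense per new customer."""
    return marketing_and_sales_expense / new_customers

def payback_months(cac: float, monthly_cm_per_customer: float) -> float:
    """Months for a cohort's contribution margin dollars to repay its CAC.
    Contribution margin here is net of discounts, returns, channel fees,
    and fulfillment -- not gross margin, and not revenue."""
    return cac / monthly_cm_per_customer

# Illustrative: $300k of spend acquires 5,000 customers, each generating
# $12/month in contribution margin.
cac = blended_cac(300_000, 5_000)     # 60.0
print(payback_months(cac, 12.0))      # 5.0 months
```

Swapping revenue in for contribution margin in the denominator is exactly the mispricing the section warns about: it shortens the apparent payback without changing the cash reality.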

2) Demand quality and durability

Most diligence teams treat retention as a single number. That’s a mistake. You want the shape of the cohort curve and what changes it.

What to analyze

  • Cohort retention at 30/60/90/180/365 days

  • Retention segmented by acquisition source and discount depth

  • Contribution margin by cohort (not just gross margin)

  • Returns/refunds by channel, SKU, and offer

Where OPs should be skeptical

If the only profitable cohorts were created under heavy discounting, the business might be selling value by the pound. That can be viable—but it should be priced as a lower-quality cash flow stream.

3) Conversion mechanics

This is where non-heroic upside lives.

If you’re buying a real conversion engine, you should be able to describe it precisely:

  • Which landing pages convert and why

  • Where the funnel leaks (PDP → cart → checkout)

  • Whether mobile performance is structurally weak

  • How merchandising mix affects margin for new customers

A bias to watch for: teams often treat CRO (conversion rate optimization) as guaranteed because it’s “in our control.” In practice, CRO is a throughput game. If the org can’t ship tests, CRO doesn’t happen.

4) Measurement integrity

Perfect measurement doesn’t exist. The goal is decision-grade measurement.

You should be able to answer:

  • Can we reconcile platform orders/revenue to finance-recognized revenue?

  • When did tracking change, and what did it break?

  • How sensitive is “performance” to attribution method (last-click vs data-driven)?

If you can’t defend measurement integrity, widen CAC ranges and reduce confidence in paid efficiency.
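The first reconciliation question reduces to a simple check. A sketch, with illustrative monthly figures:

```python
def reconciliation_gap(platform_revenue: float, finance_revenue: float) -> float:
    """Relative gap between platform-reported and finance-recognized revenue.
    Positive means the platform is reporting more than finance books."""
    return (platform_revenue - finance_revenue) / finance_revenue

# Illustrative: the commerce platform reports $1.08M for the month;
# finance recognizes $1.00M after returns, refunds, and discounts.
gap = reconciliation_gap(1_080_000, 1_000_000)
print(f"{gap:.1%}")  # 8.0%
```

A persistent gap of this size, run week by week against the data-request exports, is the evidence that the dashboard is optimistic and that CAC ranges should widen.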

The 10 diligence questions that catch most of the risk

These aren’t “marketing questions.” They’re underwriting questions.

  1. Do CAC and ROAS reconcile to finance? If marketing dashboards don’t match the financial system of record, assume the dashboard is optimistic. (Red flags: platform revenue materially exceeds finance revenue; ROAS ignores returns/refunds; discounting not reflected.)

  2. How much of paid search is branded demand? (Red flags: “profitable search” is mostly branded; non-brand CAC is far above the model.)

  3. Is payback calculated on margin dollars—or on revenue? (Red flags: payback calculated on revenue; payback only works during promo-heavy periods.)

  4. Is repeat driven by loyalty or by promos? (Red flags: retention collapses with lower discounts; repeat is concentrated in high-discount tiers.)

  5. Is there a channel or platform single point of failure? (Red flags: 50%+ of new customers from one platform/campaign type; growth depends on one creative concept.)

  6. Does performance survive an attribution sensitivity check? Run last-click vs data-driven, remove view-through, and remove retargeting. (Red flags: ROAS collapses under conservative attribution; email shows implausible first-touch impact.)

  7. Is the conversion engine structurally sound? (Red flags: weak mobile conversion with no clear fix; checkout friction; slow site; high funnel drop-off.)

  8. Do margins hold as acquisition scales? (Red flags: new customers skew to low-margin SKUs; growth correlates with margin compression.)

  9. If subscriptions exist, are they additive—or churn loops? (Red flags: subscriber “growth” is win-backs; discount-driven subscriptions erase contribution margin.)

  10. Can you translate findings into model levers IC will accept? If diligence can’t become conservative, defensible inputs, it’s not diligence—it’s marketing commentary.
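The attribution sensitivity check (recompute performance after stripping view-through and retargeting credit) can be sketched as follows. The credit categories and dollar figures are illustrative:

```python
def roas(attributed_revenue: float, spend: float) -> float:
    """Return on ad spend: attributed revenue per dollar of spend."""
    return attributed_revenue / spend

# Illustrative attributed revenue by credit type for one channel.
credited = {"click_through": 400_000, "view_through": 250_000, "retargeting": 150_000}
spend = 320_000

dashboard = roas(sum(credited.values()), spend)        # all credit counted
conservative = roas(credited["click_through"], spend)  # clicks only

print(round(dashboard, 2), round(conservative, 2))  # 2.5 1.25
```

When the conservative number is half the dashboard number, as in this illustration, the base case should not assume dashboard ROAS converts one-for-one into incremental customers.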

What IC actually needs (model-ready outputs)

An IC memo doesn’t need 40 metrics. It needs a tight set of defensible inputs with ranges.

Levers that usually hold up

  • New customer growth by channel (adjusted for concentration risk)

  • CAC range and payback range (base/upside/downside)

  • A cohort revenue curve (not a single repeat-rate assumption)

  • Margin impacts from discounting, returns, channel fees, and fulfillment

A clean diligence outcome isn’t “marketing is strong.” It’s:

  • Base case works under conservative attribution

  • Downside is defined (and hedgeable)

  • Upside is tied to executable levers, not vibes

The data request list that keeps diligence tight

Ask for these upfront to avoid the “we’ll get it later” trap:

  • Weekly marketing spend by channel (12–24 months)

  • Weekly orders + gross revenue from the commerce platform (same period)

  • Customer-level export: first order date, UTMs/channel where available, order history, discounts, returns

  • Paid search: branded vs non-brand split + search query report

  • Paid social: campaign + creative export

  • Email/SMS: attributable revenue reporting + methodology + list growth metrics

  • Returns/refunds data + policy history

  • Promo calendar and discount strategy changes

  • Tracking change log (pixel/server-side/analytics migrations)

If a team resists these requests, treat the resistance as information. Diligence friction is often correlated with data fragility.

Want a second set of eyes on a live deal?

If you’re diligencing a DTC/commerce target and want a rapid, model-ready view of:

  • whether cohorts hold under conservative assumptions

  • where CAC is real vs harvested

  • how discounts/returns affect contribution margin

  • which upside levers are executable in the first 100 days

Tadpull can help you pressure-test the numbers and translate them into IC-ready assumptions.

Book a Discovery Call with Tadpull and let us help you build a profitable growth strategy for 2026.