Validation Repeat Rate: How to Pressure-Test Frequency Assumptions Before You Build
Most founders test demand. Far fewer test whether the repeat rate their model assumes is achievable in the category they plan to enter. Demand answers "will someone buy it once." Repeat answers "will the same buyer come back at the cadence the model needs to clear CAC." The first is a survey question. The second is a research question with public-data inputs, and it is the question that quietly kills meal kits, prepared-food delivery, and frequency-dependent commerce models long after the seed round closes.
This post extends the broader frame on validation unit economics, where repeat-rate decay sits as Signal #1 of three. It is also the sister piece to the one on density math — same cluster, same teaching shape, different signal. Here we go deeper on repeat specifically — how to pressure-test the premise that your assumed orders-per-customer-per-month, visits-per-subscriber, or 6-month retention curve will actually show up, before you write a line of code or raise a dollar.
What "validation repeat rate" means
Validation repeat rate is not the same exercise as building a cohort retention model in a spreadsheet. A cohort model takes your assumptions as given — month-1 retention here, month-3 retention there — and projects forward. Validation repeat rate does the opposite: it takes your assumed frequency curve and stress-tests it against public comparables — incumbent S-1s, shut-down post-mortems, industry-association data — to ask whether any operator has ever hit the repeat numbers your model requires, in the category you plan to enter.
It is research, not forecasting. The deliverable is a build/don't-build read on whether your repeat-rate floor is reachable, supported by named comps rather than hope.
Three public-data sources for repeat signals
You do not need proprietary panel data. Three sources cover most of the surface area.
S-1 filings and earnings disclosures. Blue Apron's S-1 disclosed weekly order frequency and 6-month retention by cohort. HelloFresh's quarterly investor decks break out orders-per-customer and reactivation rate. Stitch Fix filings show the gap between trial-cohort and steady-state frequency that subscription models routinely underestimate. Peloton's earnings disclosures name the workouts-per-month figure the equity story depends on. These are public filings, and they are the closest thing you have to a calibrated yardstick. If your meal-kit plan assumes 2.0 orders per customer per week against a comp set reporting 0.8–1.2, your model has a frequency gap you can flag from public data before you build; the sketch at the end of this section shows the check.
Shut-down post-mortems. The most underused source. Munchery, Sprig, MoviePass, Take Eat Easy, Homejoy — each had a usable autopsy that named the frequency assumption that broke. Free lessons paid for by other founders. Read three before you build.
Industry-association data. The Subscription Trade Association publishes churn benchmarks by vertical. The National Restaurant Association publishes visit-frequency curves. The IAB publishes media-consumption frequency by category. Not a substitute for S-1s, but it sets the category ceiling — the frequency no operator has cleared at scale — and is the right starting point for stress-testing your own.
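If you want the comp-gap check as a repeatable habit, it fits in a dozen lines. A minimal sketch, assuming illustrative comp names and frequency ranges rather than figures from any real filing; substitute the numbers you pull yourself:

```python
# Comp-gap check: does any public comparable support the frequency
# your model assumes? All names and numbers are placeholders --
# substitute values read directly from filings.

comp_set = {
    "comp_a_s1": (0.8, 1.0),   # (low, high) weekly orders per customer
    "comp_b_deck": (0.9, 1.2),
}
assumed_weekly_frequency = 2.0  # your model's planning assumption

# Category ceiling: the best frequency any comp has reported at scale.
category_ceiling = max(high for _, high in comp_set.values())

if assumed_weekly_frequency > category_ceiling:
    gap = assumed_weekly_frequency / category_ceiling
    print(f"FLAG: assumption is {gap:.1f}x the public comp ceiling "
          f"({assumed_weekly_frequency} vs {category_ceiling}); no "
          f"operator on record has hit your number.")
else:
    print("Assumption sits inside the public comp range.")
```

The deliverable is the flag plus the named comps behind the ceiling, not the script itself.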
The Munchery case, compressed
Munchery raised roughly $125M building a vertically-integrated prepared-meal delivery service. The model assumed 1.5–2x weekly order frequency from active subscribers — the kind of frequency a household uses to replace cooking, not supplement it. The public comp set told a different story: Blue Apron and HelloFresh, both better-capitalized in the same window, reported 0.8–1.2x weekly frequency and 6-month churn above 50%. Same buyer psychology, same prepared-food category, half the assumed cadence. Munchery's model needed the high number to clear commissary CapEx; the comp set said the high number had no public precedent. The 2019 wind-down followed. The longer worked example lives in the Munchery autopsy.
The point is not that Munchery was a bad idea. The repeat-rate ceiling was knowable from comp filings and category post-mortems — before the second-, third-, and fourth-city kitchens were built.
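To see why a frequency gap is fatal to a CapEx-heavy model, run the toy payback math. Every dollar figure below is invented for illustration; only the frequency values echo the public comps above:

```python
# Toy kitchen-payback calculation. Every dollar figure is an assumed
# illustration; only the frequency values echo the public comps above.

contribution_per_order = 5.00   # $ per order after food + delivery (assumed)
commissary_capex = 5_000_000    # $ per city kitchen (assumed)
active_customers = 5_000        # per city at maturity (assumed)
WEEKS_PER_MONTH = 4.33

def months_to_payback(weekly_orders_per_customer: float) -> float:
    """Months of contribution needed to pay back one kitchen."""
    monthly_contribution = (active_customers * weekly_orders_per_customer
                            * WEEKS_PER_MONTH * contribution_per_order)
    return commissary_capex / monthly_contribution

print(f"at 1.75x weekly (the plan):     {months_to_payback(1.75):.0f} months")
print(f"at 1.0x weekly (comp midpoint): {months_to_payback(1.0):.0f} months")
# Payback stretches ~75% longer at the comp-set cadence -- before
# layering in the 50%+ 6-month churn the comps also reported.
```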
Two more that tell the same story
MoviePass. The model assumed roughly 1–2x monthly visit frequency across the subscriber base — the cadence the $9.95 price point required to break even on ticket reimbursements. Once the base scaled in 2018, heavy users (4x, 8x, 12x monthly) dominated the active cohort while low-frequency subscribers churned faster than the model assumed. The frequency curve was the inverse of the planning assumption, and the unit economics inverted with it. The distribution was flaggable from public data before a line of code was written — theater-attendance studies and prior all-you-can-watch experiments, lessons already paid for by other operators before MoviePass paid for them again.
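The break-even arithmetic takes a few lines. The ticket price is an assumption (the 2018 US average was roughly $9), and the frequency mix is illustrative, not MoviePass's actual cohort data:

```python
# MoviePass break-even in a few lines. The ticket price is an
# assumption (the 2018 US average was roughly $9); the frequency mix
# is illustrative, not actual cohort data.

subscription_price = 9.95   # $ per month
avg_ticket_price = 9.00     # $ reimbursed per visit (assumed)

breakeven_visits = subscription_price / avg_ticket_price
print(f"break-even: {breakeven_visits:.2f} visits/month")   # ~1.1

# Weight by a heavy-user-skewed mix: visits/month -> share of base.
mix = {1: 0.40, 4: 0.35, 8: 0.15, 12: 0.10}   # shares are assumed
expected_visits = sum(v * share for v, share in mix.items())
expected_cost = expected_visits * avg_ticket_price
print(f"expected: {expected_visits:.1f} visits/month, "
      f"${expected_cost:.2f} reimbursed vs ${subscription_price} collected")
```

Under that assumed mix the average subscriber costs roughly four times what the subscription collects, which is the inversion the paragraph above describes.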
Sprig. Two years before Munchery, Sprig wound down a prepared-meal delivery service in San Francisco for a structurally similar reason: assumed weekly frequency was below the cadence the central-kitchen model required, and the comp set was already telling that story in 2017. The repeat-rate comp mismatch was on the table before the next round priced in a frequency curve no operator had hit.
Common founder mistakes
Two patterns show up repeatedly when repeat-rate assumptions go unexamined.
The first is assuming launch-cohort frequency holds at scale. The first 1,000 customers are friendlies and category enthusiasts — a cohort whose frequency is roughly 2–3x the eventual steady state. Founders model that number as company-wide and plan capacity, CAC payback, and capital raises against it. The right move is to model both: the launch-cohort curve (the ceiling) and the comp-set steady state (the floor), and to fund only if the floor is also unit-economic.
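A minimal sketch of that two-curve model, with every input assumed for illustration. The ceiling is the launch-cohort curve, the floor is the comp-set steady state, and the funding question is whether the floor still pays back CAC:

```python
# Two-curve model: launch-cohort ceiling vs comp-set floor. Every
# input is assumed for illustration; plug in your own figures.

launch_monthly_frequency = 2.4   # what the first 1,000 friendlies show
comp_floor_frequency = 0.9       # steady state per the public comp set

aov = 70.0      # $ average order value (assumed)
margin = 0.30   # contribution margin (assumed)
cac = 90.0      # $ blended acquisition cost (assumed)

def cac_payback_months(monthly_frequency: float) -> float:
    """Months of contribution needed to recover CAC at a given cadence."""
    return cac / (monthly_frequency * aov * margin)

print(f"ceiling (launch cohort):   {cac_payback_months(launch_monthly_frequency):.1f} months")
print(f"floor (comp steady state): {cac_payback_months(comp_floor_frequency):.1f} months")
# Fund only if the floor payback clears your capital plan -- the
# ceiling is what the pitch deck shows; the floor is what scale pays.
```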
The second is treating retention as marketing-fixable when it is category-bounded. Some categories simply do not produce high repeat — used cars, wedding services, mattress purchases — and no email cadence or referral program moves the ceiling. If the public comp set in your category tops out at 0.9x monthly and your model needs 1.5x, that gap is a category-fit problem, calculable from public data before you spend a dollar acquiring the first customer.
How DimeADozen surfaces this
A DimeADozen.AI research-backed validation report does the repeat-rate work in two sections. The Customer Behavior section pulls comp-set frequency curves and cohort retention disclosures from incumbent filings and category post-mortems. The Risk Analysis section flags repeat-rate-comp-mismatch when stated assumptions sit above the public comp ceiling, and names the analog. The output is a structured downloadable decision document a founder can hand to a co-founder or an investor and use to pressure-test the build/don't-build read together — not a chat session you re-create from scratch every time the question comes back.
When to run this
Run validation repeat rate twice. Once before you write a line of code or raise a dollar — to confirm the frequency curve your category supports can clear the unit economics your model requires. And again before each price-point change or new-segment expansion, because frequency does not generalize: the early-adopter number is not the mass-market number, and the urban number is not the suburban number.
A DimeADozen.AI report is a different shape from a chatbot subscription: $59 once. No subscription. Credits don't expire. 1 credit = 1 full validation report. A structured downloadable decision document, not a chat session. If your model depends on weekly orders, monthly visits, or 6-month retention, the repeat math belongs in the report you read before the wire — not in the lessons-learned deck after the wind-down.
For the canonical frame on the question every founder gets wrong about validation, start with the JTBD anchor.