Hit rate is often treated as a scorecard — a number that's supposed to reveal whether a creative team is picking well. It can't do that job on its own. Two accounts with identical hit rates can be running completely different creative operations with completely different winner output.
Here's the illustrative comparison — hypothetical, not real account data — that makes the point most cleanly:
| Account | Launches | Winners | Hit rate (%) |
|---|---|---|---|
| Account A | 50 | 5 | 10 |
| Account B | 5 | 1 | 20 |
Account B has double Account A's hit rate. Account A has five times Account B's absolute winner output. If what the business needs is winners to scale, Account A is producing five times as many of them. If what the business tracks is hit rate, Account B looks like the "better" account.
Both can be true. They describe different questions.
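A minimal sketch of the two reads, using the hypothetical numbers from the table above:

```python
# Hypothetical accounts from the table: (launches, winners).
accounts = {"Account A": (50, 5), "Account B": (5, 1)}

for name, (launches, winners) in accounts.items():
    hit_rate = winners / launches
    # Hit rate answers "how often does a launch win?";
    # the winner count answers "how much scalable creative did we get?"
    print(f"{name}: hit rate {hit_rate:.0%}, absolute winners {winners}")
```

Ranking by the first number picks Account B; ranking by the second picks Account A.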
## Why the hit rate trap shows up
There are two paths to a high hit rate:
- Strong judgment. The team only launches creatives it has high conviction about. A high reject rate at the briefing stage produces a high win rate at the performance stage.
- Conservative testing. The team launches so few creatives that small sample sizes produce noisy, inflated ratios. One winner out of four launches is 25%; the underlying probability is still ~5%.
Path 2 is common and hard to spot from hit rate alone. It looks like discipline when the team is really leaving winner opportunities on the table. The only way to distinguish the two is to read hit rate against volume: how much is the team launching, how long is it waiting between tests, and how many absolute winners is it producing per month.
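The small-sample effect in Path 2 is easy to demonstrate. A quick Monte Carlo sketch, assuming an underlying 5% hit rate (the trial count and thresholds are illustrative, not from the report):

```python
import random

random.seed(7)

TRUE_HIT_RATE = 0.05  # assumed underlying probability that a launch wins
TRIALS = 10_000       # simulated reporting periods

def observed_rates(launches_per_period: int) -> list[float]:
    """Simulate many periods; record the hit rate each one would report."""
    rates = []
    for _ in range(TRIALS):
        winners = sum(random.random() < TRUE_HIT_RATE
                      for _ in range(launches_per_period))
        rates.append(winners / launches_per_period)
    return rates

for n in (4, 50):
    rates = observed_rates(n)
    inflated = sum(r >= 0.20 for r in rates) / TRIALS
    print(f"{n} launches/period: {inflated:.0%} of periods report a 20%+ hit rate")
```

At 4 launches per period, roughly one period in five reports a 20%+ hit rate purely by chance; at 50 launches, essentially none do. The inflated ratio is a volume artifact, not evidence of judgment.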
## What "good" hit rate actually means
Across Motion's 2026 dataset, tier-average hit rates range from 4.0% (Micro) to 8.8% (Enterprise). Those are the reference points that should anchor hit rate evaluation — not round numbers like "we should aim for 10%."
A few diagnostic reads:
- Hit rate at tier average, volume below tier average. Testing capacity is probably the constraint. More output at the same hit rate would produce more absolute winners.
- Hit rate well above tier average, volume well below tier average. The high hit rate may reflect conservative testing. Worth trying a higher-volume cadence and watching whether hit rate holds near the tier average while absolute winners increase.
- Hit rate at or above tier average, volume at or above tier average. The account is pulling its weight. Focus shifts from testing capacity to creative quality differentiation.
- Hit rate well below tier average, volume high. This is the case where hit rate is doing its job — it's signaling a real creative quality problem. Time to look at briefing, production, and format/hook mix.
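The four reads above can be sketched as a simple lookup. The 0.8×/1.2× bands and the tier volume figure are hypothetical thresholds for illustration; only the Enterprise 8.8% average comes from the text:

```python
def diagnose(hit_rate: float, volume: float,
             tier_hit_rate: float, tier_volume: float) -> str:
    """Map (hit rate, volume) vs. tier averages to one of the four reads."""
    low_hr = hit_rate < 0.8 * tier_hit_rate    # well below tier average
    high_hr = hit_rate > 1.2 * tier_hit_rate   # well above tier average
    low_vol = volume < 0.8 * tier_volume
    if high_hr and low_vol:
        return "possible conservative testing: try a higher-volume cadence"
    if low_hr and not low_vol:
        return "creative quality problem: review briefing, production, format/hook mix"
    if low_vol:
        return "testing capacity is the constraint: increase output"
    return "healthy: shift focus to creative quality differentiation"

# Example: Enterprise tier average hit rate 8.8% (from the report);
# a 40-launches/month tier volume is a made-up reference point.
print(diagnose(hit_rate=0.20, volume=5, tier_hit_rate=0.088, tier_volume=40))
```

The point of the sketch is the shape of the logic, not the thresholds: every branch conditions on volume as well as hit rate, because hit rate alone cannot separate the four cases.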
## The practical rule
Hit rate is a secondary metric. Winners per month and creative volume are the primary ones. Hit rate is valuable as a diagnostic when it falls outside the expected range for your tier and volume. Otherwise it's mostly noise.
Every table in this report that references hit rate is tier-indexed for exactly this reason. Comparing hit rates across tiers is an apples-and-oranges exercise; comparing within a tier, against volume, produces usable signal.