The 10× benchmark is Motion's working definition of a winning Meta ad for 2026. It describes a creative that has pulled at least ten times the account's median creative spend, with a minimum absolute floor. It is deliberately a high bar — and that's what makes it useful as a signal.
The definition, exactly
A creative in the 2026 dataset qualifies as a winner when both conditions are met:
- Its total spend is ≥ 10× the median creative spend for its account.
- Its total spend is ≥ $500 in absolute terms.
The ratio component filters for account-relative outliers. The $500 floor prevents tiny absolute spends from qualifying purely on ratio (a creative that spent $40 against a $3 account median is a 13× ratio but not meaningful performance).
All 578,750 creatives in the dataset were evaluated against their own account's median, so "winner" is never an absolute cross-account comparison. A $2,000 spender at a Micro account and a $200,000 spender at an Enterprise account can both be winners; they're judged against their own account's baseline.
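The two-condition rule above can be sketched as a small predicate. This is an illustrative reconstruction from the stated definition, not Motion's actual pipeline code; the function name and sample values are hypothetical.

```python
def is_winner(spend, account_median, ratio_threshold=10.0, floor=500.0):
    """A creative qualifies as a winner when BOTH conditions hold:
    spend >= ratio_threshold x its own account's median creative spend,
    AND spend >= the absolute floor ($500)."""
    return spend >= ratio_threshold * account_median and spend >= floor

# The $40 creative against a $3 account median: a 13.3x ratio,
# but it fails the $500 floor, so it does not qualify.
print(is_winner(40, 3))             # False: ratio passes, floor fails
# A $2,000 spender at a Micro account (median $150) and a $200,000
# spender at an Enterprise account (median $18,000) both qualify,
# each judged against its own account's baseline.
print(is_winner(2_000, 150))        # True
print(is_winner(200_000, 18_000))   # True
```

Note that both checks use total spend only; no ROAS or revenue figure enters the definition.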
Where 10× sits on the distribution
On the full ratio-to-median distribution — every creative's multiplier against its account median — the 10× threshold lands at approximately the 92.3rd percentile. That means about 7.7% of all creatives in the dataset spend at 10× their account median or higher.
That 7.7% gets narrowed further by the $500 floor, which brings the effective hit rate to approximately 5% across the full dataset. The $500 floor matters most at smaller advertiser accounts, where it removes ads that are high-ratio on very low absolute spend.
Why spend, not ROAS or revenue
The 2026 dataset uses spend as the primary success metric, not ROAS, revenue, or conversion. Two reasons:
- Cross-account comparability. ROAS depends on pricing, margins, and revenue attribution that differ by advertiser. Spend is a consistent signal of how Meta's auction system is allocating budget within an account.
- Platform signal independence. Meta's auction pushes spend toward ads that convert attention into measurable action. High spend over time is the auction's own verdict on performance — without needing any advertiser-specific success metric.
This isn't a claim that spend perfectly captures business value. It's a claim that spend is the cleanest shared signal across 6,000+ accounts with different business models.
What the threshold is not
Three things 10× does not mean:
- Not a quality metric. A 10× winner isn't necessarily a better creative than a 7× mid-range spender. It's a higher-performing creative under this account's conditions, at this time, against this audience.
- Not a guarantee of profitability. A winner by spend can still be ROAS-negative. The auction allocates budget based on its own measures of performance, not yours.
- Not universal across accounts. A Micro advertiser's 10× winner at $600 total spend is a different phenomenon from an Enterprise advertiser's 10× winner at $600,000. Both qualify, but the scale context matters.
Planning heuristic
If you want a rough rule for what winner volume to expect:
- Average hit rate: ~5% across all tiers — about 1 in 20 creatives.
- Adjusted for tier: ~3.8% at Micro → ~8.2% at Enterprise — about 1 in 26 at the low end, 1 in 12 at the high end.
An account testing 12 ads per month at tier-average performance will surface roughly 0.6 winners per month (12 × 5%). Winner cadence is a function of testing volume more than anything else in this data.
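The heuristic above reduces to a binomial expectation. The tier hit rates are taken from the text; the function itself is an illustrative sketch.

```python
def expected_winners(ads_per_month, hit_rate):
    """Expected winner count per month if each creative independently
    clears the 10x bar with probability `hit_rate` (binomial mean)."""
    return ads_per_month * hit_rate

# 12 ads/month at the tier-average ~5% hit rate: ~0.6 winners/month.
print(round(expected_winners(12, 0.05), 2))
# Micro (~3.8%) vs Enterprise (~8.2%) at the same testing volume.
print(round(expected_winners(12, 0.038), 2))
print(round(expected_winners(12, 0.082), 2))
```

Because the expectation is linear in volume, doubling the number of ads tested doubles expected winner count at any tier, which is the sense in which cadence is driven by testing volume.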
See Winning ads are rare for the broader framing, or Methodology for the full definitions and suppression rules.