Motion's 2026 dataset groups advertisers into five monthly Meta ad spend tiers. The two most useful per-tier metrics are average weekly testing volume (creatives launched per week) and average hit rate (winners as a percentage of total creatives).
| Spend tier (per month) | Average testing volume (per week) | Average hit rate (%) |
|---|---|---|
| Micro (<$10K) | 2.8 | 4.0 |
| Small ($10K–$50K) | 4.1 | 6.4 |
| Medium ($50K–$200K) | 6.6 | 8.1 |
| Large ($200K–$1M) | 11.2 | 8.6 |
| Enterprise ($1M+) | 18.8 | 8.8 |
Hit rate = (winner creatives ÷ total creatives) × 100, computed at the account level; tier values are unweighted means across the accounts in each tier.
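The account-level formula and the unweighted tier average can be sketched as follows. The account data here is illustrative, not from the dataset:

```python
def hit_rate(winners: int, total: int) -> float:
    """Account-level hit rate: (winner creatives / total creatives) x 100."""
    return winners / total * 100

def tier_hit_rate(accounts: list[tuple[int, int]]) -> float:
    """Unweighted mean: every account counts equally, regardless of spend."""
    rates = [hit_rate(w, t) for w, t in accounts]
    return sum(rates) / len(rates)

# Hypothetical tier of three accounts: (winners, total creatives)
tier = [(2, 50), (5, 80), (3, 40)]
print(round(tier_hit_rate(tier), 2))  # -> 5.92
```

Because the mean is unweighted, the small 40-creative account moves the tier average exactly as much as the 80-creative one.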
## Reading the table
Two patterns are worth separating:
Volume scales aggressively. Enterprise accounts test roughly 6.7× the weekly volume of Micro accounts (18.8 vs 2.8). The gap widens at every tier transition and compounds with spend.
Hit rate scales modestly. Hit rate roughly doubles from Micro (4.0%) to Enterprise (8.8%). That's real — and it compounds with volume when calculating absolute winner output — but it's nothing like a 6.7× multiplier.
The implication: volume is the dominant lever behind winner production at scale. Hit rate matters, but it can't compensate for limited volume.
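To see how the two levers compound, multiply each tier's volume by its hit rate to get expected winners per week (all numbers from the table above):

```python
# Tier averages from the table: (weekly testing volume, hit rate %)
TIERS = {
    "Micro": (2.8, 4.0),
    "Small": (4.1, 6.4),
    "Medium": (6.6, 8.1),
    "Large": (11.2, 8.6),
    "Enterprise": (18.8, 8.8),
}

for name, (volume, rate) in TIERS.items():
    winners_per_week = volume * rate / 100
    print(f"{name}: {winners_per_week:.2f} winners/week")

# Enterprise vs Micro: (18.8 * 8.8) / (2.8 * 4.0) ~= 14.8x the winner
# output -- far more than the ~2.2x hit-rate gap alone would predict.
```

The ~14.8× gap in absolute winner output is the product of the 6.7× volume gap and the 2.2× hit-rate gap, which is what "volume is the dominant lever" means in practice.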
## Why this matters for benchmarking
The most common way creative teams read their hit rate is against a generic industry benchmark — "our hit rate is 6%, is that good?" That question is hard to answer without tier context. Six percent is normal for Small/Medium advertisers and below tier-average for Large/Enterprise.
The tier-level benchmarks here are a cleaner reference point:
- If you're a Micro advertiser with a 4% hit rate, that's tier-average. Worth investigating whether volume (2.8/week) is the constraint on winner production.
- If you're an Enterprise advertiser with a 4% hit rate, that's substantially below tier-average and probably deserves deeper diagnostic work on creative quality, audience targeting, or account fragmentation.
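The tier-aware read above can be expressed as a small lookup. The benchmark values come from the table; the tolerance band and message strings are assumptions for illustration:

```python
# Tier-average hit rates (%) from the table above
BENCHMARK_HIT_RATE = {
    "Micro": 4.0, "Small": 6.4, "Medium": 8.1, "Large": 8.6, "Enterprise": 8.8,
}

def assess_hit_rate(tier: str, hit_rate_pct: float) -> str:
    """Compare an account's hit rate to its tier average, not a generic one."""
    benchmark = BENCHMARK_HIT_RATE[tier]
    if hit_rate_pct >= benchmark:
        return "at or above tier average"
    if hit_rate_pct >= 0.75 * benchmark:  # assumed tolerance band
        return "slightly below tier average"
    return "substantially below tier average; deeper diagnostics warranted"

print(assess_hit_rate("Micro", 4.0))       # -> at or above tier average
print(assess_hit_rate("Enterprise", 4.0))  # -> substantially below ...
```

The same 4% hit rate yields opposite verdicts depending on tier, which is the core benchmarking point.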
## Top-quartile accounts within each tier
The averages in the table above are for all accounts in each tier. The top 25% of accounts within each tier test considerably more — for Medium, 15.9 creatives per week vs the tier average of 6.6. That 2.4× gap within a single tier is consistent across tiers and is one of the clearest signals in the dataset about what separates the best-performing advertisers from the average.
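One way to reproduce a "top 25%" figure like the Medium-tier 15.9/week is to rank accounts by weekly volume and average the top quarter. A minimal sketch, with hypothetical account volumes rather than dataset values:

```python
def top_quartile_mean(volumes: list[float]) -> float:
    """Mean weekly testing volume among the top 25% of accounts."""
    ranked = sorted(volumes, reverse=True)
    top_n = max(1, len(ranked) // 4)  # at least one account
    return sum(ranked[:top_n]) / top_n

# Hypothetical Medium-tier accounts (creatives launched per week)
volumes = [3, 4, 5, 5, 6, 6, 7, 8, 9, 10, 14, 18]
print(round(top_quartile_mean(volumes), 1))  # -> 14.0
```

As in the dataset, the top-quartile mean sits well above the all-account mean because the distribution of testing volume is right-skewed.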
## Methodology notes
- Testing volume = mean creatives per week per account. Only accounts with ≥10 unique creatives during the window are included (`MIN_ACCOUNT_CREATIVES = 10`).
- Hit rate is unweighted: each account contributes equally to the tier average regardless of spend size.
- Creative counts are unique creatives; duplicates and variations count as distinct when launched as separate ads.
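The inclusion threshold from the notes above can be sketched as a simple filter. The account names and counts here are hypothetical:

```python
MIN_ACCOUNT_CREATIVES = 10  # inclusion threshold from the methodology notes

def eligible_accounts(creative_counts: dict[str, int]) -> list[str]:
    """Keep only accounts with >= MIN_ACCOUNT_CREATIVES unique creatives."""
    return [acct for acct, n in creative_counts.items()
            if n >= MIN_ACCOUNT_CREATIVES]

counts = {"acct_a": 25, "acct_b": 9, "acct_c": 10}  # hypothetical accounts
print(eligible_accounts(counts))  # -> ['acct_a', 'acct_c']
```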
See Methodology for full definitions.