This page documents how every number elsewhere in the 2026 Creative Benchmarks was produced. It exists so you can evaluate whether the findings apply to your own situation, and so the methodology is on record whenever a specific stat from the report is cited.
Dataset scope
- Platforms: Meta (Facebook and Instagram).
- Accounts: 6,015 advertiser accounts.
- Creatives: 578,750 unique creatives launched during the window.
- Spend: $1.29 billion in realized ad spend.
- Window: September 1, 2025 – January 1, 2026. End date is at least 28 days before the last available data point so every creative has equal opportunity to qualify as mid-range (avoids end-of-window censoring).
- Privacy: Aggregated and anonymous. No advertiser, campaign, ad, or creative is identifiable in any published table.
The window deliberately spans one of the most competitive promotion cycles of the year — pre-holiday testing, BFCM, and post-holiday reset. Creative turnover is high and competition for attention is higher. Findings are specific to this window; patterns may look different in steady-state periods.
Core definitions
Winner
A creative is a winner if it meets both:
- Spend ≥ 10× the account median creative spend, AND
- Spend ≥ $500 in absolute terms.
The ratio captures statistical outliers relative to the account's own baseline; the $500 floor prevents tiny absolute amounts from qualifying purely on ratio.
Constants: TIER_THRESHOLDS = 10 for all tiers; MIN_SPEND_FLOOR = 500.
Mid-range
A creative is mid-range if it has spent for at least 28 days during the window AND does not meet the winner threshold. Mid-range creatives are durable, steadily-spending ads that persist without reaching outlier status.
Loser
A creative is a loser if it was turned off (or never reached active spend) before 28 days, meaning it is neither a winner nor mid-range.
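The three classifications above can be sketched as one function. This is an illustrative rendering of the stated rules, not the notebook's actual implementation; the argument names are assumptions.

```python
def classify_creative(spend, account_median_spend, active_days,
                      tier_threshold=10, min_spend_floor=500):
    """Classify a creative as 'winner', 'mid-range', or 'loser'.

    Hypothetical sketch of the rules above (TIER_THRESHOLDS = 10,
    MIN_SPEND_FLOOR = 500); field names are illustrative.
    """
    # Winner: spend >= 10x the account median AND >= $500 absolute.
    if (spend >= tier_threshold * account_median_spend
            and spend >= min_spend_floor):
        return "winner"
    # Mid-range: spent for at least 28 days without reaching winner status.
    if active_days >= 28:
        return "mid-range"
    # Loser: turned off (or never active) before 28 days.
    return "loser"
```

Note that the winner check runs first: a creative can qualify as a winner before 28 days, since the winner definition has no duration requirement.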
Hit rate
hit_rate = (winner_creatives ÷ total_creatives) × 100, calculated at account level, then averaged as an unweighted mean across accounts in a tier (each account contributes equally regardless of spend size).
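In code, the two-step calculation looks like this. The `(winner_count, total_count)` pair structure is an assumption for illustration, not the notebook's schema.

```python
from statistics import mean

def tier_hit_rate(accounts):
    """Unweighted mean hit rate across accounts in one tier.

    `accounts` is a list of (winner_count, total_count) pairs —
    an illustrative structure, not the notebook's actual schema.
    """
    # Step 1: hit rate per account, as a percentage.
    per_account = [100 * w / t for w, t in accounts]
    # Step 2: unweighted mean — each account counts equally,
    # regardless of its spend size.
    return mean(per_account)
```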
Spend tier
Accounts are grouped by average monthly Meta ad spend during the window:
- Micro: <$10,000/month
- Small: $10,000–$50,000/month
- Medium: $50,000–$200,000/month
- Large: $200,000–$1,000,000/month
- Enterprise: $1,000,000+/month
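A minimal binning sketch of the tiers above. The published ranges share their boundary values (e.g., $50,000 appears in both Small and Medium), so the assignment of exact-boundary accounts to the higher tier here is an assumption.

```python
def spend_tier(avg_monthly_spend):
    """Map average monthly Meta ad spend to a tier label.

    Bounds follow the list above; placing exact-boundary values
    in the higher tier is an assumption, since the published
    ranges overlap at the edges.
    """
    if avg_monthly_spend < 10_000:
        return "Micro"
    if avg_monthly_spend < 50_000:
        return "Small"
    if avg_monthly_spend < 200_000:
        return "Medium"
    if avg_monthly_spend < 1_000_000:
        return "Large"
    return "Enterprise"
```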
Spend use ratio
For visual formats, hook tactics, and asset types: (format_share_of_spend) ÷ (format_share_of_creative_usage).
- >1.0 → Format punches above its weight (captures more spend than its volume share).
- ≈1.0 → Performs as expected.
- <1.0 → Overused relative to the spend it captures.
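The ratio reduces to one line of arithmetic; the sketch below spells out both shares. Parameter names are illustrative.

```python
def spend_use_ratio(format_spend, total_spend,
                    format_creatives, total_creatives):
    """Spend use ratio = share of spend / share of creative usage.

    >1.0 means the format captures more spend than its volume
    share; <1.0 means it is overused relative to the spend it wins.
    """
    share_of_spend = format_spend / total_spend
    share_of_usage = format_creatives / total_creatives
    return share_of_spend / share_of_usage
```

For example, a format holding 30% of spend on 10% of creatives scores 3.0.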
Top accounts
Within each spend tier, the top 25% of accounts by winner count during the window. Comparison is always within a tier, never across tiers.
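A sketch of the within-tier quartile cut. The report doesn't specify how ties at the cutoff or non-divisible group sizes are handled, so the simple truncation below is an assumption.

```python
def top_accounts(winner_counts):
    """Return the top 25% of accounts in one tier by winner count.

    `winner_counts` maps account_id -> winner count for a single
    tier (comparison is never across tiers). Truncating the
    quartile size and breaking ties by sort order are assumptions.
    """
    ranked = sorted(winner_counts, key=winner_counts.get, reverse=True)
    k = max(1, len(ranked) // 4)  # top-quartile size, at least one account
    return ranked[:k]
```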
Why spend is the primary success metric
The 2026 dataset uses spend — not ROAS, CPA, revenue, or conversion — as the primary performance signal. Two reasons:
- Cross-account comparability. Spend is consistent across 6,000+ accounts with different business models, margins, and measurement setups. ROAS depends on variables specific to each advertiser; comparing it across accounts produces apples-and-oranges noise.
- Platform-level signal. Meta's auction pushes spend toward ads that convert attention into action (by the platform's own measures). Cumulative spend on a creative is the auction's running verdict on its performance, independent of any advertiser-specific success metric.
This isn't a claim that spend perfectly captures business value. It's a claim that spend is the one metric the dataset can measure consistently.
Suppression rules
Four thresholds govern what's published:
1. Minimum account creatives
MIN_ACCOUNT_CREATIVES = 10. Accounts with fewer than 10 unique creatives during the window are dropped. Population after filter: 6,015 accounts, 578,750 creatives.
2. Minimum accounts for vertical
MIN_ACCOUNTS_FOR_BRAND_CATEGORY = 50. Any brand category with fewer than 50 accounts is remapped to "Other." Applies to the vertical heatmap (CH-007) and vertical-level breakdowns.
3. Minimum accounts for format/tactic/asset leaderboards
MIN_ACCOUNTS_FOR_FORMAT = 50. For visual format, hook tactic, and asset type leaderboards, any segment with fewer than 50 unique accounts is excluded. Applies to CH-009, CH-010, CH-011, CH-012.
4. Minimum accounts for taxonomy diversity
MIN_ACCOUNTS_FOR_TAXONOMY = 100. Used in diversity score calculation only; not used for report tables reproduced in this LLM edition.
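Rules 1 and 2 can be sketched as a single pass over the account list. The dict keys (`creative_count`, `vertical`) are illustrative field names, not the notebook's schema.

```python
MIN_ACCOUNT_CREATIVES = 10
MIN_ACCOUNTS_FOR_BRAND_CATEGORY = 50

def apply_suppression(accounts):
    """Apply suppression rules 1 and 2 to a list of account dicts.

    Each dict is assumed to carry 'creative_count' and 'vertical'
    keys — illustrative names only.
    """
    # Rule 1: drop accounts with fewer than 10 unique creatives.
    kept = [a for a in accounts
            if a["creative_count"] >= MIN_ACCOUNT_CREATIVES]
    # Rule 2: remap under-represented verticals to "Other".
    counts = {}
    for a in kept:
        counts[a["vertical"]] = counts.get(a["vertical"], 0) + 1
    for a in kept:
        if counts[a["vertical"]] < MIN_ACCOUNTS_FOR_BRAND_CATEGORY:
            a["vertical"] = "Other"
    return kept
```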
How to interpret these findings
Associations, not causation. Findings describe statistical relationships. "Accounts that spend more tend to have higher hit rates" is supported by the data. "Spending more causes higher hit rates" is not.
Distribution shape, not creative quality. Winner classification identifies statistical rarity, not creative excellence in isolation. Hit rate reflects how often rare events occur within an account, not how "good" a team's ideas are.
Cannot infer ROAS or revenue impact. These benchmarks describe Meta spend concentration and creative longevity. They don't measure downstream business outcomes. A winner by spend can still be ROAS-negative.
Cannot re-identify accounts. No advertiser, brand, domain, URL, or creative identifier appears in any published table. Re-identification attempts against the aggregated data are not supported by design.
Safe example queries against this data
Reasonable questions the 2026 Creative Benchmarks data can answer:
- What is the average hit rate for Medium-tier advertisers on Meta?
- How is spend allocated between winners, mid-range, and losers for Enterprise accounts?
- Which visual formats have the highest spend use ratio in the 2026 dataset?
- What is the definition of a winner, and where does the 10× threshold sit on the distribution?
- How does testing volume vary by vertical at each spend tier?
Questions the data is not equipped to answer:
- What ROAS or CPA should I expect from a specific format?
- Which exact advertisers are in the top quartile?
- What's the optimal budget allocation for my specific account?
Reproducibility reference
For teams with access to the underlying analysis notebook:
- Configuration cell: DATE_RANGE_START, DATE_RANGE_END, SPEND_BINS, SPEND_LABELS, TIER_THRESHOLDS, MIN_SPEND_FLOOR, MIN_ACCOUNT_CREATIVES, CREATIVE_VOLUME_BINS
- Dataset ID: metrics_tagged_creatives_20260130
- Notebook: benchmarks/benchmark_2025_4_final.ipynb
All figures in this LLM edition are either copied from the PDF report or derived from notebook outputs. Where PDF text was ambiguous (e.g., column alignment on CH-006 Large tier), the resolution applied is documented inline and in the source map.