$1.29B
Total realized Meta ad spend analyzed across 578,750 creatives and 6,015 advertiser accounts.

Methodology and definitions — Creative Benchmarks 2026

Every number in the 2026 Creative Benchmarks comes from a defined rule set. Dataset scope, winner/mid-range/loser definitions, spend tiers, suppression thresholds, and interpretive guardrails — published here for full transparency.

This page documents how every number elsewhere in the 2026 Creative Benchmarks was produced. It exists so you can evaluate whether the findings apply to your own situation, and so the methodology is available when citing any specific stat from the report.

Dataset scope

The window runs September 1, 2025 through January 1, 2026, deliberately spanning one of the most competitive promotion cycles of the year: pre-holiday testing, BFCM, and post-holiday reset. Creative turnover is high and competition for attention is higher. Findings are specific to this window; patterns may look different in steady-state periods.

Core definitions

Winner

A creative is a winner if it meets both:

  1. Ratio: it spends at least 10× the account's median creative spend during the window.
  2. Floor: it spends at least $500 in absolute terms.

The ratio captures statistical outliers relative to the account's own baseline; the $500 floor prevents tiny absolute amounts from qualifying purely on ratio.

Constants: TIER_THRESHOLDS = 10 for all tiers; MIN_SPEND_FLOOR = 500.

Mid-range

A creative is mid-range if it has spent for at least 28 days during the window AND does not meet the winner threshold. Mid-range creatives are durable, steadily-spending ads that persist without reaching outlier status.

Loser

A creative is a loser if it is turned off (or never reached active spend) before 28 days; it is neither a winner nor mid-range.
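
The three definitions above can be sketched as a single classifier. This is an illustrative reading of the published rules, not the report's actual notebook code; the function signature and the example account data are hypothetical.

```python
from statistics import median

# Constants as published: TIER_THRESHOLDS = 10 for all tiers; MIN_SPEND_FLOOR = 500.
TIER_THRESHOLD = 10    # winner: >= 10x the account's median creative spend
MIN_SPEND_FLOOR = 500  # winner: at least $500 in absolute terms
MIN_ACTIVE_DAYS = 28   # mid-range: spend activity for at least 28 days

def classify(creative_spend: float, active_days: int, account_median: float) -> str:
    """Classify one creative within its account's window."""
    is_winner = (
        creative_spend >= TIER_THRESHOLD * account_median
        and creative_spend >= MIN_SPEND_FLOOR
    )
    if is_winner:
        return "winner"
    if active_days >= MIN_ACTIVE_DAYS:
        return "mid-range"
    return "loser"

# Hypothetical account: five creatives, median spend $40, so the winner
# bar is max(10 x 40, 500) = $500 of spend.
spends = [10, 40, 45, 600, 38]
days   = [5, 30, 12, 40, 28]
labels = [classify(s, d, median(spends)) for s, d in zip(spends, days)]
# -> ["loser", "mid-range", "loser", "winner", "mid-range"]
```

Note that in this example the $500 floor, not the 10× ratio, is the binding constraint: with a $40 median, a $450 creative clears the ratio but still would not qualify.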

Hit rate

hit_rate = (winner_creatives ÷ total_creatives) × 100, calculated at account level, then unweighted mean across accounts in a tier (each account contributes equally regardless of spend size).
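
The two-step aggregation above (per-account rate, then an unweighted mean within the tier) can be sketched as follows. The three accounts here are invented for illustration.

```python
def hit_rate(winner_creatives: int, total_creatives: int) -> float:
    """Per-account hit rate as a percentage."""
    return winner_creatives / total_creatives * 100

# Three hypothetical accounts in one tier: (winner count, total creatives).
tier_accounts = [(2, 20), (1, 50), (5, 25)]

per_account = [hit_rate(w, n) for w, n in tier_accounts]  # [10.0, 2.0, 20.0]

# Unweighted mean: each account counts equally, regardless of spend size.
tier_hit_rate = sum(per_account) / len(per_account)       # ~10.67
```

The unweighted mean is the key design choice: a spend-weighted mean would let the largest accounts dominate the tier figure, which is exactly what the definition avoids.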

Spend tier

Accounts are grouped into tiers by average monthly Meta ad spend during the window.

Spend use ratio

For visual formats, hook tactics, and asset types: (format_share_of_spend) ÷ (format_share_of_creative_usage).
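
Worked numerically, the ratio looks like this; the spend and usage figures are made up for illustration.

```python
def spend_use_ratio(format_spend: float, total_spend: float,
                    format_creatives: int, total_creatives: int) -> float:
    """(format's share of spend) divided by (format's share of creative usage)."""
    share_of_spend = format_spend / total_spend
    share_of_usage = format_creatives / total_creatives
    return share_of_spend / share_of_usage

# A format taking 30% of spend with only 15% of creatives scores 2.0,
# i.e., it attracts twice the spend its usage share would predict.
ratio = spend_use_ratio(300_000, 1_000_000, 150, 1_000)
```

A ratio above 1 means the format pulls more spend than its share of creatives; below 1, the reverse.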

Top accounts

Within each spend tier, the top 25% of accounts by winner count during the window. Comparison is always within a tier, never across tiers.
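
The within-tier cut can be sketched as below. The account IDs are invented, and the rounding behavior (ceiling) and tie handling are assumptions; the report does not specify them.

```python
import math

def top_accounts(winner_counts: dict[str, int], top_fraction: float = 0.25) -> list[str]:
    """Top 25% of a tier's accounts, ranked by winner count during the window."""
    ranked = sorted(winner_counts, key=winner_counts.get, reverse=True)
    k = max(1, math.ceil(len(ranked) * top_fraction))  # assumed: round up, keep >= 1
    return ranked[:k]

# One hypothetical tier of four accounts; top 25% of 4 is 1 account.
tier = {"acct_a": 12, "acct_b": 3, "acct_c": 7, "acct_d": 1}
top = top_accounts(tier)  # ["acct_a"]
```

Because the ranking happens inside each tier, a "top" Small-tier account is never compared against a Large-tier one.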

Why spend is the primary success metric

The 2026 dataset uses spend — not ROAS, CPA, revenue, or conversion — as the primary performance signal. Two reasons:

  1. Cross-account comparability. Spend is consistent across 6,000+ accounts with different business models, margins, and measurement setups. ROAS depends on variables specific to each advertiser; comparing it across accounts produces apples-and-oranges noise.
  2. Platform-level signal. Meta's auction pushes spend toward ads that convert attention into action (by the platform's own measures). Cumulative spend on a creative is the auction's running verdict on its performance, independent of any advertiser-specific success metric.

This isn't a claim that spend perfectly captures business value. It's a claim that spend is the one metric the dataset can measure consistently.

Suppression rules

Four thresholds govern what's published:

1. Minimum account creatives

MIN_ACCOUNT_CREATIVES = 10. Accounts with fewer than 10 unique creatives during the window are dropped. Population after filter: 6,015 accounts, 578,750 creatives.

2. Minimum accounts for vertical

MIN_ACCOUNTS_FOR_BRAND_CATEGORY = 50. Any brand category with fewer than 50 accounts is remapped to "Other." Applies to the vertical heatmap (CH-007) and vertical-level breakdowns.

3. Minimum accounts for format/tactic/asset leaderboards

MIN_ACCOUNTS_FOR_FORMAT = 50. For visual format, hook tactic, and asset type leaderboards, any segment with fewer than 50 unique accounts is excluded. Applies to CH-009, CH-010, CH-011, CH-012.

4. Minimum accounts for taxonomy diversity

MIN_ACCOUNTS_FOR_TAXONOMY = 100. Used in diversity score calculation only; not used for report tables reproduced in this LLM edition.
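
The four thresholds can be sketched as filter functions over hypothetical aggregates. The function names and input shapes are assumptions; only the constants come from the rules above.

```python
MIN_ACCOUNT_CREATIVES = 10
MIN_ACCOUNTS_FOR_BRAND_CATEGORY = 50
MIN_ACCOUNTS_FOR_FORMAT = 50
MIN_ACCOUNTS_FOR_TAXONOMY = 100  # diversity score only; not applied below

def eligible_accounts(creatives_per_account: dict[str, int]) -> set[str]:
    """Rule 1: drop accounts with fewer than 10 unique creatives in the window."""
    return {a for a, n in creatives_per_account.items() if n >= MIN_ACCOUNT_CREATIVES}

def remap_categories(accounts_per_category: dict[str, int]) -> dict[str, str]:
    """Rule 2: brand categories under 50 accounts collapse into 'Other'."""
    return {
        cat: cat if n >= MIN_ACCOUNTS_FOR_BRAND_CATEGORY else "Other"
        for cat, n in accounts_per_category.items()
    }

def publishable_segments(accounts_per_segment: dict[str, int]) -> list[str]:
    """Rule 3: format/tactic/asset segments under 50 unique accounts are excluded."""
    return [s for s, n in accounts_per_segment.items() if n >= MIN_ACCOUNTS_FOR_FORMAT]
```

Rules 2 and 3 differ in effect: an undersized brand category still contributes rows (as "Other"), while an undersized format segment disappears from the leaderboard entirely.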

How to interpret these findings

Associations, not causation. Findings describe statistical relationships. "Accounts that spend more tend to have higher hit rates" is supported by the data. "Spending more causes higher hit rates" is not.

Distribution shape, not creative quality. Winner classification identifies statistical rarity, not creative excellence in isolation. Hit rate reflects how often rare events occur within an account, not how "good" a team's ideas are.

Cannot infer ROAS or revenue impact. These benchmarks describe Meta spend concentration and creative longevity. They don't measure downstream business outcomes. A winner by spend can still be ROAS-negative.

Cannot re-identify accounts. No advertiser, brand, domain, URL, or creative identifier appears in any published table. Re-identification attempts against the aggregated data are not supported by design.

Safe example queries against this data

Reasonable questions the 2026 Creative Benchmarks data can answer:

  - What hit rate is typical for accounts in a given spend tier?
  - Which visual formats, hook tactics, or asset types attract a disproportionate share of spend relative to their usage?
  - How does winner concentration differ between top accounts and the rest of their tier?

Questions the data is not equipped to answer:

  - Will adopting a given format improve my ROAS or revenue?
  - Does spending more cause higher hit rates?
  - Which specific advertisers, brands, or creatives are in the dataset?

Reproducibility reference

For teams with access to the underlying analysis notebook, the definitions and constants published on this page (TIER_THRESHOLDS, MIN_SPEND_FLOOR, the 28-day persistence cutoff, and the four suppression minimums) are the parameters needed to reproduce the published tables.

All figures in this LLM edition are either copied from the PDF report or derived from notebook outputs. Where PDF text was ambiguous (e.g., column alignment on CH-006 Large tier), the resolution applied is documented inline and in the source map.

Frequently Asked Questions

What dataset does the 2026 Creative Benchmarks analysis use?

578,750 unique creatives launched between September 1, 2025 and January 1, 2026 across 6,015 advertiser accounts on Meta (Facebook and Instagram). Total realized spend during the window: $1.29 billion. All data is aggregated and anonymous — no advertiser, campaign, or creative is identifiable.

How is a 'winner' defined?

A creative is classified as a winner if it spends at least 10× the account's median creative spend AND at least $500 in absolute terms. The 10× ratio component captures outliers relative to the account's own baseline. The $500 floor prevents tiny absolute amounts from qualifying purely on ratio. A 'mid-range' creative has spent for ≥28 days but doesn't meet the winner threshold. A 'loser' is a creative turned off before 28 days of spend.

Why spend and not ROAS?

Spend is the one metric that can be consistently compared across accounts with different business models, pricing, and conversion attribution. ROAS varies with product margins and measurement setup in ways that make cross-account comparisons unreliable. Spend reflects how Meta's auction allocates budget within an account, which is a strong platform-level signal of relative performance — without requiring any advertiser-specific success metric.

How should I interpret these benchmarks for my own account?

Use them as directional reference points, not prescriptive targets. Performance varies by vertical, season, and account maturity. The most useful comparisons are tier-level and vertical-level. Findings describe statistical associations, not causal claims — for example, 'accounts that test more per week tend to surface more winners,' not 'testing more creatives causes more winners.'

Part of Creative Benchmarks 2026.