Asset type is the medium a Meta ad is built in — text-only, UGC, high production, illustration, animation, carousel, and so on. The 2026 dataset ranks 15+ asset types by hit rate and spend use ratio, and the pattern is counterintuitive for teams that associate "better creative" with "higher production value."
Top asset types by hit rate
In approximate rank order:
1. Text only
2. Product image with text
3. Lifestyle-product image
4. UGC
5. High production
6. GIF
7. Illustration
8. UGC mashup
9. Lifestyle-product image with text
10. Lifestyle image with text
11. Lifestyle image
12. Hybrid
13. Product image
14. Animation
15. Carousel
Hit rates across this band run approximately 4–12%. The top of the list is text-forward and UGC-forward — not high-production.
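The piece doesn't formally define hit rate; a minimal sketch, assuming it is the fraction of an asset type's creatives that clear some winner threshold (the function name, sample data, and ROAS threshold here are all illustrative, not from the dataset):

```python
def hit_rate(results, is_hit):
    """results: list of (asset_type, metric) pairs.
    is_hit: predicate deciding whether a creative counts as a winner."""
    tested, hits = {}, {}
    for asset_type, metric in results:
        tested[asset_type] = tested.get(asset_type, 0) + 1
        if is_hit(metric):
            hits[asset_type] = hits.get(asset_type, 0) + 1
    # Hit rate = winners / creatives tested, per asset type
    return {t: hits.get(t, 0) / n for t, n in tested.items()}

# Illustrative only: call a creative a "hit" if its ROAS beats 2.0
sample = [
    ("Text only", 2.4), ("Text only", 1.1),
    ("Carousel", 0.9), ("Carousel", 1.5),
]
rates = hit_rate(sample, lambda roas: roas > 2.0)
```

With this toy data, "Text only" lands at a 50% hit rate and "Carousel" at 0%; the real leaderboard's 4–12% band comes from the same kind of per-type winner fraction, just over many accounts.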
Top asset types by spend use ratio
Close but not identical:
1. Text only
2. Product image with text
3. Illustration
4. UGC
5. Lifestyle-product image with text
6. Lifestyle image with text
7. UGC mashup
8. Hybrid
9. Product image
10. High production
11. GIF
12. Lifestyle image
13. Lifestyle-product image
14. Animation
15. Carousel
Spend use ratios run roughly 0.5–1.9; a ratio above 1.0 means an asset type captures a larger share of spend than its share of creative volume. Text-only and Product-image-with-text punch above their weight, while Animation and Carousel under-capture spend relative to their volume share.
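The ratio itself can be sketched from that description: an asset type's share of total spend divided by its share of creative volume. This is an assumed reading of the metric, and the function name and sample numbers are illustrative:

```python
def spend_use_ratio(creatives):
    """creatives: list of (asset_type, spend) pairs, one per creative."""
    total_spend = sum(s for _, s in creatives)
    total_count = len(creatives)
    by_type = {}
    for asset_type, spend in creatives:
        t = by_type.setdefault(asset_type, {"spend": 0.0, "count": 0})
        t["spend"] += spend
        t["count"] += 1
    # Ratio = (share of spend) / (share of creative volume)
    return {
        asset_type: (t["spend"] / total_spend) / (t["count"] / total_count)
        for asset_type, t in by_type.items()
    }

sample = [
    ("Text only", 900.0), ("Text only", 700.0),   # few creatives, heavy spend
    ("Animation", 100.0), ("Animation", 150.0),
    ("Animation", 120.0), ("Animation", 130.0),   # many creatives, light spend
]
ratios = spend_use_ratio(sample)
```

In this toy portfolio, "Text only" is 2 of 6 creatives but most of the spend, so its ratio is well above 1.0; "Animation" is the reverse, landing below 1.0, which is the "under-capture" pattern described above.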
Why text-forward wins
Three mechanisms explain why text-heavy and UGC-forward assets dominate both leaderboards:
- Speed and clarity. A text-forward creative communicates the offer, the proof, and the CTA without requiring the viewer to parse an elaborate visual. In a feed-scanning environment, speed is the game.
- Iteration velocity. Text assets can be rebuilt and retested in hours; a high-production video takes days or weeks. Over a quarter, a team running text-forward tests will have made dozens of iterations while the high-production team has made two or three.
- Low production floor. The cheapest text-only creative can still win. The cheapest high-production creative usually doesn't — if you're going to invest in high production, you have to do it well, which adds risk.
The role of high production
High-production assets sit in the middle of the hit-rate leaderboard, not the bottom. They still produce winners at a healthy rate (roughly tier-average). But they play a different role in a portfolio:
- Credibility establishment. High-production assets signal brand maturity and seriousness. That signal matters even when the asset itself isn't the biggest winner.
- Scaling known winners. A text-forward asset that identifies a winning creative angle can be "upgraded" to a high-production version once the angle is proven. High production is sometimes more useful as a second stage than as a first test.
- Category norms. Some verticals (Automotive, Technology, Finance, Travel & Hospitality) expect high-production baseline quality. In those categories, high production is table stakes rather than differentiator.
The portfolio read
A creative operation that only runs text-forward assets is leaving brand-credibility value unrealized. A creative operation that only runs high-production assets is leaving testing-velocity value unrealized. The data suggests the strongest accounts run both, with deliberate role assignment:
- Text-forward and UGC: Primary testing and winner-discovery layer.
- Lifestyle-product images, Hybrid, GIF: Workhorse middle layer.
- High production, Animation: Brand anchor and winner-scaling layer.
Caveats
- 2026 data window. The Sep 2025 – Jan 2026 window includes BFCM (Black Friday–Cyber Monday), and text-forward, offer-heavy asset types are particularly well-suited to that promotional cycle. In a steady-state period, the gap between text-forward and high-production may narrow.
- Asset type is coarse. "High production" covers a wide range of quality and cost. The dataset doesn't separate $10K-production assets from $100K-production ones.
- Vertical matters. Same vertical-dependence applies here as for formats and hooks. An asset type that wins in DTC Beauty may not win in B2B Technology.
Methodology notes
- Suppression: Only asset type segments with ≥50 accounts appear in the published leaderboard.
- Classification: Asset type is a single label per creative, assigned via visual analysis during the tagging step.
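The suppression rule above can be sketched as a simple filter. The data shape (a mapping from asset type to the set of account IDs observed in that segment) is assumed for illustration:

```python
MIN_ACCOUNTS = 50  # suppression threshold from the methodology notes

def suppress(segments, min_accounts=MIN_ACCOUNTS):
    """segments: dict of asset_type -> set of distinct account ids.
    Returns only segments with enough accounts to publish."""
    return {
        asset_type: accounts
        for asset_type, accounts in segments.items()
        if len(accounts) >= min_accounts
    }

segments = {
    "Text only": {f"acct_{i}" for i in range(120)},
    "Hybrid": {f"acct_{i}" for i in range(30)},  # below 50: suppressed
}
published = suppress(segments)
```

Counting distinct accounts (rather than creatives) is what keeps a single heavy-testing account from pushing a thin segment onto the leaderboard.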