Creative Benchmarks 2026
Only 4–8% of Ads Are Winners
And that is true whether you spend $10,000 a month or $1 million. The difference between the brands at 4% and the brands at 8% is not budget. It is system.
Published: March 2026
- 4.0% micro hit rate: accounts spending <$10K/mo
- 8.8% enterprise hit rate: accounts spending $1M+/mo
- 63.7% of enterprise spend on winners: budget concentrated on scaling ads
- 23.0% of micro spend on winners: budget spread thin across everything
What Is a Winning Ad?
Before any hit rate number means anything, you need a precise definition of “winner.” In this dataset (578,750 ads across 6,015 accounts and $1.29 billion in realized spend), a winning ad meets two thresholds simultaneously.
First, it must spend at least 10 times the account's median ad spend. This is a relative measure. It means the ad dramatically outperformed the typical creative in that specific account, not that it hit some arbitrary dollar figure. Second, it must cross an absolute floor of $500 in total spend. This eliminates statistical noise from low-spend accounts where a few dollars of variance could create false positives.
The dual-threshold approach matters. A high relative bar means you are measuring genuinely exceptional performance, not just “above average.” The absolute floor means small accounts cannot inflate the dataset with winners that never actually scaled.
By contrast, a loser is any ad turned off before 28 days of active spend. A mid-range ad ran for 28 or more days but never reached the 10x threshold. Mid-range ads are not failures. They keep accounts stable, provide learning data, and sometimes contribute meaningful revenue. But they are not the ads that change your growth trajectory.
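The three-way taxonomy above can be sketched as a small classifier. This is a minimal illustration, not the report's actual pipeline: the function and argument names are invented, and it assumes the winner test takes precedence over the 28-day cutoff.

```python
RELATIVE_MULTIPLE = 10   # ad spend must reach 10x the account's median ad spend
ABSOLUTE_FLOOR = 500     # and at least $500 in total spend
MIN_ACTIVE_DAYS = 28     # ads turned off before this are "losers"

def classify_ad(ad_spend: float, account_median_spend: float, active_days: int) -> str:
    """Classify one ad relative to its own account (winner / loser / mid-range)."""
    # Winner: both the relative (10x median) and absolute ($500) thresholds.
    if (ad_spend >= RELATIVE_MULTIPLE * account_median_spend
            and ad_spend >= ABSOLUTE_FLOOR):
        return "winner"
    # Loser: killed before 28 days of active spend.
    if active_days < MIN_ACTIVE_DAYS:
        return "loser"
    # Mid-range: survived 28+ days but never hit the 10x threshold.
    return "mid-range"
```

For an account with a $500 median ad spend, an ad that reached $6,000 classifies as a winner, a $200 ad killed on day 10 as a loser, and an $800 ad that ran 40 days as mid-range.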
Hit Rate by Spend Tier
The pattern is clear but the range is narrow. Micro accounts (under $10K/month) see a 4.0% hit rate. Enterprise accounts ($1M+) reach 8.8%. That is a 2.2x difference, meaningful but not the order-of-magnitude gap you might expect given the budget disparity between these tiers.
The ceiling matters. Even enterprise accounts (with dedicated creative teams, testing infrastructure, and millions in monthly spend) convert fewer than 1 in 10 ads into winners. This is not a failure of execution. It is the nature of creative performance. Most ads are, by mathematical definition, average. Winners are rare events.
The jump from micro to small ($10K–$50K) is the steepest: 4.0% to 6.4%. This suggests the first meaningful investment in creative testing volume pays the largest marginal return. After that, gains flatten. Moving from medium to large to enterprise adds less than one percentage point at each step.
Testing volume drives this. Micro accounts launch an average of 2.8 new creatives per month. Small accounts: 4.1. Enterprise: 18.8. More launches means more chances for a winner to emerge, but it also means the denominator grows, which is why hit rate does not scale linearly with spend.
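The interaction between launch volume and hit rate is easiest to see as expected winners per month: launches times hit rate. A minimal arithmetic sketch using the tier figures quoted above (the helper name is illustrative):

```python
def expected_winners(launches_per_month: float, hit_rate: float) -> float:
    """Average number of new winners a tier produces per month."""
    return launches_per_month * hit_rate

# Launch volumes and hit rates quoted in this report, by spend tier.
TIERS = {
    "micro":      (2.8, 0.040),   # <$10K/mo
    "small":      (4.1, 0.064),   # $10K-$50K/mo
    "enterprise": (18.8, 0.088),  # $1M+/mo
}

for name, (launches, rate) in TIERS.items():
    print(f"{name:>10}: ~{expected_winners(launches, rate):.2f} winners/month")
```

Micro accounts average about 0.11 expected winners per month, roughly one winner every nine months, while enterprise accounts average about 1.65. The hit-rate gap is 2.2x, but the absolute gap in winners produced is nearly 15x.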
Where Does the Money Actually Go?
Roughly half of all ads are losers. Across every tier, from micro to enterprise, the loser percentage hovers between 49% and 54%. If you expected larger budgets to produce fewer losers, the data disagrees. Enterprise accounts actually have a slightly higher loser percentage (52.2%) than small accounts (49.3%), because they test more aggressively and kill faster.
Winners make up 3.7% of micro-account portfolios and 8.2% of enterprise portfolios. The gap is real but modest. The remaining 38–46% are mid-range ads. These are the ads that ran for 28 or more days, contributed revenue, but never reached escape velocity.
Mid-range ads deserve more attention than they get. They are not glamorous. They do not appear in case studies. But they keep accounts running while you search for the next winner. A healthy portfolio needs mid-range ads the way a baseball team needs consistent .260 hitters. They maintain the baseline while you wait for the home runs.
The real question is not how many ads fall into each bucket. It is how much money flows to each bucket. That is where the gap becomes a chasm.
The Sophistication Gap
This is the most important chart on this page. It shows the same five spend tiers, but instead of counting ads, it shows where the money goes. The difference is stark.
Enterprise accounts concentrate 63.7% of their total spend on winners. Only 13.8% goes to losers. Micro accounts are nearly the inverse: 23.0% to winners and 31.5% to losers. The difference is not creative quality. Both tiers have roughly the same winner-to-loser ratio in their portfolios. The difference is what happens after winners are identified.
Enterprise brands have systems (automated rules, dedicated media buyers, creative analytics tools) that detect winners within days and aggressively shift budget toward them. Losers get killed fast. Mid-range ads get maintained at low spend. Winners get the majority of the budget.
Micro brands do not have these systems. Spend stays distributed evenly. Losers run longer than they should because nobody is watching the data daily. Winners do not get scaled because the budget is already allocated across everything else. The result: 31.5% of budget going to ads that never had a chance.
On a $10,000/month budget, 31.5% going to losers means $3,150 per month spent on ads that were turned off within 28 days. That is $37,800 per year. On a $50,000/month budget at the small tier (25.7% to losers), that is $12,850/month or $154,200 per year.
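The waste arithmetic above generalizes to any budget. A minimal sketch (the function name is invented for illustration):

```python
def wasted_spend(monthly_budget: float, loser_share: float) -> tuple[float, float]:
    """Return (monthly, annual) dollars going to ads killed within 28 days."""
    monthly = monthly_budget * loser_share
    return monthly, monthly * 12

# The two examples from the text:
micro_month, micro_year = wasted_spend(10_000, 0.315)   # micro tier
small_month, small_year = wasted_spend(50_000, 0.257)   # small tier
print(f"micro: ${micro_month:,.0f}/mo, ${micro_year:,.0f}/yr")
print(f"small: ${small_month:,.0f}/mo, ${small_year:,.0f}/yr")
```

This reproduces the $3,150/$37,800 and $12,850/$154,200 figures above.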
The sophistication gap is not about making better ads. Both micro and enterprise accounts produce roughly the same ratio of winners to losers. The gap is about what you do with winners once you have them. It is an operations problem, not a creative problem.
Why Hit Rate Can Be Misleading
Consider two accounts. Account A launches 50 new creatives in a month. Five become winners. That is a 10% hit rate. Account B launches 5 new creatives. One becomes a winner. That is a 20% hit rate. Which account is performing better?
Account B has double the hit rate. But Account A has five times more winners. In absolute terms (in revenue, in scalable assets, in compounding creative intelligence), Account A is in a stronger position. Its lower hit rate reflects higher testing volume, not worse creative.
This is the paradox embedded in the data. Enterprise accounts have higher hit rates and higher testing volumes. But for smaller brands, a high hit rate can mask a dangerous lack of experimentation. If you are only launching a few ads per month, your hit rate is a noisy statistic. One lucky win inflates it. One bad month destroys it. The sample size is too small to mean anything.
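The small-sample noise can be made concrete with an exact binomial calculation. This sketch assumes each launch wins independently with some fixed true rate, which is an idealization the report does not claim:

```python
from math import comb

def hit_rate_spread(n_launches: int, true_rate: float) -> dict[int, float]:
    """Exact probability of observing each possible number of winners,
    assuming every launch independently wins with probability true_rate."""
    return {k: comb(n_launches, k) * true_rate**k * (1 - true_rate)**(n_launches - k)
            for k in range(n_launches + 1)}

# A brand launching 5 ads/month with a true 6% hit rate:
probs = hit_rate_spread(5, 0.06)
print(f"P(0 winners) = {probs[0]:.1%}")  # most months show a 0% "hit rate"
print(f"P(1 winner)  = {probs[1]:.1%}")  # one lucky win reads as 20%
```

With five launches and a true 6% rate, about 73% of months show zero winners and about 23% show exactly one, so the observed monthly hit rate is almost always either 0% or 20%. Neither number reflects the underlying 6%.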
Hit rate also hides the quality of the winners. A 10% hit rate where winners scale to $50K in spend is categorically different from a 10% hit rate where winners cap at $2K. The first account has scalable creative assets. The second has statistical artifacts.
Hit rate tells you how often rare events happen. It does not tell you how good your creative team is. It does not tell you whether your winners are actually scalable. And it does not tell you whether your testing volume is high enough for the number to mean anything. Use it as context, not as a KPI.
From Benchmark to Action
What the data tells you
- 4–8% of ads win, regardless of budget
- Half of all ads are turned off within 28 days
- The gap is in spend allocation, not creative talent
- More testing volume leads to more winners in absolute terms
What the data cannot tell you
- Why any specific ad won
- Which hook type, beat structure, or proof placement drove the result
- How to replicate a winner's structural formula
- What the winning pattern looks like inside your category right now
Benchmarks tell you the shape of the game. They tell you that winning is rare, that most money is wasted, and that sophistication lives in operations, not just creativity. This is valuable context. But context alone does not produce winners.
Knowing the hit rate is like knowing the batting average for the league. Useful context. But it does not teach you how to swing. What changes outcomes is understanding why specific ads win: the hook that stopped the scroll, the beat structure that held attention, the proof placement that converted, the psychological mechanism that made the viewer care. That is structural intelligence. That is what turns a 4% hit rate into an 8% hit rate without spending more money.
Only 8% of ads win. Heista decodes what the 8% have in common.
Decode any winning ad in your category. See the formula, the hook type, the beat structure, the proof placement. Then generate your version.