
Why your AI video ads underperform — and how analyzing winners changes the math

Same model. Different brief. Different result. Why analyzing winning ads is the missing input — and what changes when you stop writing brand briefs.

Peter's Lab
RUNTIME 8 MIN · PUBLISHED 2026-05-12 · TOPIC STRATEGY · ISSUE № 002
A side-by-side: a generic AI-generated UGC ad with low engagement metrics on the left, vs. a winning-ad-derived AI video ad with strong engagement on the right. The visual difference isn't model quality — it's the brief the model was given.

You spent real money testing AI video ads this quarter. The results landed in the same range every time: CTRs in the low fractions of a percent, ROAS under 1×, the creative team blaming the model and the model team blaming the creative team. The conclusion most people land on is that AI video ads just don't work yet, or that they only work for brands with massive budgets and human-in-the-loop creative directors.

That conclusion is wrong, and the reason it's wrong is more interesting than "use a better model."

The model is not your problem

The thing that makes a UGC video ad convert is not the realism of the face, the smoothness of the camera, or the production polish. It's the script — specifically, the mechanics of the script. Where the hook lands. How the problem is framed. When the product enters the frame. What proof gets stacked. How the CTA is delivered.

Every model in the AI video stack — Seedance, Veo, Kling, the next one shipping next week — is a translator. You hand it a script and it produces a video. If the script is weak, the video is a high-fidelity rendering of a weak script. The model didn't fail. It executed exactly what you asked for.

So when an AI-generated video ad underperforms, the diagnosis isn't "the AI is bad." It's "the brief I gave the AI was wrong." And the brief is almost always wrong in the same way.

What a wrong brief looks like

When most teams write a brief for an AI video ad, they write a brand brief. They list the product features, the value props, the brand voice guidelines, the tone words ("warm but confident, never pushy"), the demographic ("women 28–45, suburban, eco-conscious"). Then they ask the model to "make a UGC ad about this."

What they've handed the model is a recipe for an advertorial — a piece of content that describes the brand. What they actually wanted was a recipe for an ad that converts — a piece of content engineered to interrupt the scroll, plant a curiosity gap, deliver a problem-agitate-solve loop, and close with a CTA the viewer can't ignore.

These two things look similar on a Notion page. They are not the same artifact. The first one flattens into the same generic AI UGC video everyone has seen on their feed: a smiling presenter, neutral background, "I love this product because…", soft cut to a packshot, "link below." Polished. Forgettable. Zero conversion.

The second one is what a senior creative strategist writes after they've watched a hundred winners in your category and learned which mechanics carry the ad. That's the brief the model needed. That's the brief that produces a video ad that actually performs.

Where to find the right brief

The right brief is sitting in your competitor's ad library, in the ads that have been running for 90+ days. An ad that's been live that long is paying for itself; no advertiser keeps funding a loser for three months. The mechanics of that ad are the brief you wanted.

The hard part isn't getting access — Meta Ad Library is free. The hard part is reading the ad with enough rigor to extract the mechanics, not just the surface. Watching the ad once tells you what happens. Watching it ten times tells you what happens, slowly. What you need is a structured pass that names the hook mechanic, traces the emotional arc, marks where the product enters, grades each beat against a rubric, and tells you which 70% is replicable and which 30% is "you'd need that specific creator to pull it off."
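To make "structured pass" concrete, here is a minimal sketch of what such a pass could produce, written in Python. Everything in it is an illustrative assumption: the field names, the mechanic labels, and the 1–5 rubric scale are stand-ins, not any actual report schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names, mechanic labels, and the 1-5
# rubric scale are assumptions, not an actual product schema.

@dataclass
class Beat:
    start_s: float       # where this beat begins, in seconds
    end_s: float         # where it ends
    label: str           # "hook", "problem", "product reveal", "cta", ...
    mechanic: str        # e.g. "pattern interrupt", "numerical confession"
    rubric_score: int    # 1-5 grade against a fixed rubric
    replicable: bool     # False if it needs that specific creator

@dataclass
class AdAnalysis:
    ad_url: str
    hook_mechanic: str            # named, not just described
    emotional_arc: list[str]      # e.g. ["vulnerable", "defiant"]
    product_entry_s: float        # when the product enters the frame
    beats: list[Beat] = field(default_factory=list)

    def replicable_share(self) -> float:
        """Fraction of beats that work without the original creator."""
        if not self.beats:
            return 0.0
        return sum(b.replicable for b in self.beats) / len(self.beats)
```

In this framing, the 70/30 split a strategist eyeballs is just `replicable_share()` computed over the graded beats.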

A senior strategist can do this in about 20 minutes per ad. Multiply by the ten ads in a competitor library and you're past three hours per category; multiply by every category you're testing and the work stops happening. Most teams skip it, write the brand brief instead, and end up where this post started: AI ads that look fine and convert poorly.

What an analysis-driven brief gives you

When you start the AI video pipeline from a winning-ad analysis instead of a brand brief, four things change at once.

The hook stops being generic. Instead of "Hey beauties, today I want to talk about…", the model writes a hook with a specific mechanic — pattern interrupt, numerical confession, false-belief shatter, curiosity gap with a specific gap to close. These are the mechanics the winner used; the AI inherits them.

The emotional arc has shape. A bad AI ad is monotone — same energy from second 1 to second 30. A winner moves the viewer: vulnerable to defiant, casual to urgent, neutral to revelatory. When the analysis flags that arc, the AI script can replicate it.

The product reveal is staged. In a generic AI ad, the product appears immediately and stays in frame. In a winner, the product enters at a specific beat — usually after the problem is fully named — and the staging matters as much as the timing. Analysis names that beat. Generation places it.

The CTA becomes specific. "Link below" is the laziest CTA. Winners specify, urge, and remove friction in the same breath. Replicating the CTA mechanic — not the words — gives the AI ad the same close.

The four dimensions an analysis-driven brief carries through into the generated video: hook mechanic, emotional arc, product reveal, and CTA mechanic. Each one maps a generic-AI failure mode to a winner-derived alternative.

None of this requires a better video model. It requires a better brief, derived from a winner that already proved the mechanics work.
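If you wanted to see those four dimensions as an artifact, an analysis-driven brief could be as small as the sketch below. The field names and example values are hypothetical; the point is that every mechanic is inherited from a winner, and the brand inputs shrink to two fields at the bottom instead of being the whole brief.

```python
from dataclasses import dataclass

# Hypothetical shape of an analysis-driven brief. Field names and
# example values are illustrative, not a real schema.

@dataclass
class AnalysisDrivenBrief:
    hook_mechanic: str        # inherited from the winner
    emotional_arc: list[str]  # the shape the script must move through
    product_entry_beat: str   # when and how the product enters
    cta_mechanic: str         # the close, as a mechanic rather than words
    product: str              # brand inputs still matter;
    brand_voice: str          # they just stop being the whole brief

brief = AnalysisDrivenBrief(
    hook_mechanic="false-belief shatter",
    emotional_arc=["casual", "urgent"],
    product_entry_beat="after the problem is fully named",
    cta_mechanic="specific action, stated urgency, zero friction",
    product="YourProduct",            # placeholder
    brand_voice="warm but confident",
)
```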

What this looks like end-to-end

The pipeline is: paste a URL → get a graded analysis report in about 90 seconds → confirm the script the system writes from that analysis (you can edit it) → generate the AI video ad with your brand, your product, your character. The whole loop is under 10 minutes for a 15-second ad. The output is a Meta DCO-ready asset pack with five hook variants you can A/B test from day one.
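As an orchestration sketch, the loop could look like the code below. Every function is a stubbed, hypothetical stand-in for a stage; nothing here is the actual API, and the stub bodies exist only to make the hand-offs between stages visible.

```python
# Hypothetical orchestration of the loop above. Each function is a
# stubbed stand-in for a pipeline stage, not a real API.

def analyze_ad(url: str) -> dict:
    """Stage 1: rubric-graded analysis of a winning ad (~90 seconds)."""
    return {"hook_mechanic": "pattern interrupt",
            "emotional_arc": ["casual", "urgent"]}

def write_script(analysis: dict, brand: dict) -> str:
    """Stage 2: a script derived from the analysis, not a brand brief.
    The user confirms (and can edit) this before generation."""
    return f"[{analysis['hook_mechanic']}] 15s script for {brand['product']}"

def generate_video(script: str, brand: dict) -> str:
    """Stage 3: hand the approved script to the video model."""
    return f"video::{script}"

def package_for_dco(video: str, n_hooks: int = 5) -> list[str]:
    """Stage 4: Meta DCO-ready pack with hook variants to A/B test."""
    return [f"{video}::hook_{i}" for i in range(1, n_hooks + 1)]

def run_loop(winning_ad_url: str, brand: dict) -> list[str]:
    analysis = analyze_ad(winning_ad_url)
    script = write_script(analysis, brand)   # human edit point
    video = generate_video(script, brand)
    return package_for_dco(video)

pack = run_loop("https://example.com/some-winning-ad",
                {"product": "YourProduct"})
```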

You don't have to be the team's senior creative strategist. The analysis layer does that work, on every ad, every time, against the same rubric.

What's still hard

Two things, honestly.

Categories that win on calm authority — wellness, B2B, finance — don't read as "winner" the same way high-energy DTC does. The same analysis pipeline runs, the same generator runs, but the mechanics are quieter and the AI's instinct still leans dramatic. We're handling this with category tags right now; the longer fix is per-category prompt branches.

Long-form replication — anything past 30 seconds — is on the roadmap, not in production. The architecture supports it, but output quality still degrades as runtime grows past that length. For now, treat the system as best-in-class for 8–20s creative, which covers the bulk of paid-social inventory anyway.

The takeaway

If your AI video ads are underperforming, do not switch models. Switch what you give the model. The mechanics that make a winning UGC ad win are observable, structured, and transferable — and the part of that work that doesn't scale by hand is exactly the part an analysis layer is built to do.

Start from a winner. Let the analysis carry the mechanics into your script. Let the generator render it. The math changes.


If you want to see the analysis on an ad in your category, the CTA below will take you straight to it. The fastest way to understand the difference is to run one ad you're already studying through the loop and read the report.