How to run creative testing on Meta: A complete guide


Our creative systems have helped generate over £400M in revenue for DTC brands over the past few years. Throughout those years, staying agile has been one of the main constants. The Meta landscape has changed drastically over the past 5 years, and 2025 is no different.

The rules of paid social have changed - again.

Meta’s new Andromeda engine has replaced tidy, rule-based targeting with a hungry, probabilistic system that lives on fresh data and limitless creative options. It can trial 5,000 ads a week, not 50, which means the brands that win are the ones that feed it a steady diet of diverse, insight-driven assets - and they do it fast.

This guide distils everything you need to thrive in that reality.

Inside, you’ll learn how to generate strategic creative volume, test with surgical precision, and scale winners without wasting budget. Whether you’re running £500 a day or £50k, you’ll get concrete frameworks, checklists, and campaign architectures that turn chaos into disciplined experimentation.

What is Meta Andromeda?

Over the past few years, Meta has been shifting away from hands-on account management and towards automation - from the early days of CBOs all the way through to the newest iteration of Advantage+, dubbed Meta Andromeda.

Meta’s Andromeda update has fundamentally shifted creative testing dynamics. It favours a strategic, volume-based approach over traditional concept isolation, and it can now support 5,000 ads a week rather than 50. The brands that understand this and cater to Andromeda’s needs will be the brands that win on Meta.

Ultimately, performance now depends on strategic volume, speed, and iteration of creative. You need to focus on creating a high volume of content that appeals to distinctly different audience personas and preferences.

What is 'Soulmate theory'?

“Soulmate theory” is just shorthand for Andromeda’s goal: every ad impression should feel as if the advertiser crafted that exact image and line of copy specifically for the person viewing it - and the system now has the scale and smarts to try.

You should now operate under the theory that there is a perfect creative for each user. Instead of “Which audience fits this ad?”, the question becomes “Which specific ad version will resonate most with this single user right now?”

The algorithm has moved from deterministic to probabilistic

But what does that actually mean in this context?

Deterministic (Old Way)

The algorithm behaved like a rulebook.

If a user matched certain parameters (e.g., location, age, interest in “running shoes”), the system would follow a fixed chain of logic to choose which ad to show.

Every time the same inputs came in, it produced the same output.

This meant Meta was limited. It was:

  • Rigid, failing to adapt to nuance or context.
  • Unable to easily exploit new opportunities or patterns in the data.

Probabilistic (New Way)

The algorithm now behaves like a statistician + gambler + learner.

It doesn’t just look at hard rules; it weighs probabilities that a given ad will perform best based on vast, shifting signals: behaviour, history, context, creative, time of day, etc.

Two identical users might now see different ads, because the system is optimising based on the likelihood of engagement or conversion, not fixed logic.

This creates an algorithm that is:

  • Adaptive and dynamic - it learns from outcomes in real time.
  • Better performing at scale, especially with complex, noisy data.
  • More “personalised” and responsive in feel.

In plain English

Moving from deterministic to probabilistic means the ad-picking brain went from rigid “if-this-then-that” rules to a smart, ever-learning “best-guess” approach. Instead of guaranteeing the same ad sequence every time, it now plays the odds. Your customers get ads that feel more relevant, you see better results, and the algorithm gets smarter.
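
To make the shift concrete, here’s a minimal, illustrative Python sketch (our own toy example, not Meta’s actual system) contrasting a fixed rule lookup with a probability-weighted pick:

```python
import random

# Old, deterministic logic: the same inputs always return the same ad.
def pick_ad_deterministic(user):
    if user["interest"] == "running shoes" and user["age"] < 35:
        return "price_drop_video"
    return "doctor_endorsed_static"

# New, probabilistic logic: each ad carries an estimated win probability built
# from many shifting signals, and delivery is weighted by those scores - so two
# identical users can be served different ads.
def pick_ad_probabilistic(scores):
    names, weights = zip(*scores.items())
    return random.choices(names, weights=weights, k=1)[0]

user = {"age": 29, "interest": "running shoes"}
print(pick_ad_deterministic(user))
print(pick_ad_probabilistic({"sleep_better_ugc": 0.2,
                             "doctor_endorsed_static": 0.3,
                             "price_drop_video": 0.5}))
```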

Why it matters for e-commerce brands

Anyone involved in growing an e-commerce brand on Meta needs to understand this:

You need a high volume of distinctly different assets across all awareness stages, underpinned by the psychological messaging frameworks that drive your core customers to convert. By doing this, you are more likely to serve the right ads, to the right people, at the right time.

How to run creative tests on Meta

When it comes to performance creative, “just make something that looks good” doesn’t cut it. Every ad you put out into the world is a chance to gather data - to learn what resonates, what converts, and what drives action. The best creatives aren’t just built to impress; they’re built to test. You should be dedicating 20% of your time to big-swing concepts and 80% to making incremental progress through iteration.

Meta doesn’t just want more ads - it wants different ads.

When you design with testing in mind, you're not just guessing your way to a winning ad. You’re creating a clear path to get there faster using hypothesis-driven thinking, repeatable systems, and smart tweaks that let the data guide the next round.

With Meta’s Andromeda update, creative diversification is no longer optional. Ads that look too similar now get grouped and penalised - meaning that iteration must be about variation, not just minor tweaks.

This section breaks down exactly what that looks like, including how to bake experimentation into the bones of your creative process.

The Method: Test Like You Mean It

Before you even think about shooting or designing anything, you should be thinking like a strategist.

Here’s the testing method we use:

1. Start with a hypothesis: e.g., “Urgency in the headline will lift CTR.”

2. Choose a creative baseline: Identify a top-performing ad concept to optimise - for example, a POV-style video or a before & after.

3. Isolate your variables: 3–6 key elements like the hook, creator, visuals, and format.

4. Run the test, track results, and log learnings: Capture what worked (and what didn’t) in your learnings tracker (a minimal tracker entry is sketched after this list).

5. Diversify beyond similarity: Variations need to meaningfully differ in look, feel, or motivator - not just copy. Treat the first 3 seconds as a key variable, as Meta uses this to determine similarity.
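
There’s no fixed format for a learnings tracker; as a rough sketch (the field names are our own illustration, not a requirement), a single test entry might look like this in Python:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CreativeTest:
    hypothesis: str       # e.g. "Urgency in the headline will lift CTR"
    baseline_ad: str      # the proven concept being iterated on
    variable: str         # the single element changed (hook, creator, format...)
    launched: date
    result: str = ""      # filled in once the test window closes
    next_step: str = ""   # what the learning feeds into next round

log = [
    CreativeTest(
        hypothesis="Urgency in the headline will lift CTR",
        baseline_ad="POV demo video",
        variable="hook copy + opening shot",
        launched=date(2025, 1, 6),
    ),
]
```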

Thinking in ‘Isolated Variables’

When we say "create with testing in mind," we mean that every decision you make, from headline to voiceover tone, should double as a potential test. What learnings can you gather from each creative decision that you make?

Hook Strategy: It’s More Than Just the Words

Your hook isn’t just the words. It’s the visual moment that defines whether your ad is treated as new. Meta now groups ads as ‘the same’ if the opening scene looks similar, even if the script changes. That means testing hooks must include visual swings - different settings, creators, and styles - not just different lines.

Here’s an example of how tests used to be run…

This leaned into making small changes that focused on:

  • Testing curiosity/emotion/identity variations.
  • Emphasis on words and positioning.

But these would all be flagged by Meta as looking too similar.

This is what Meta sees as differentiated - while they’re all static examples, the messaging and visuals are completely different.

Going forward:

  • Start one version with a creator selfie, another with a bold overlay, another with a demo shot - all targeting the same motivator.
  • Think of your first 3 seconds as a persona trigger. Curiosity for one group, urgency for another, empathy for a third.
  • Changing only the words while keeping the same shot no longer counts as a test.

Angle & Pain Point Testing: Get Specific

Then there’s angle testing - essentially, the main story or focus of your ad. You can also dial deeper into pain-point specificity. Take menopause, for instance: that’s a broad category. But zoom in, and you’ll find very different resonant messages: “brain fog,” “joint pain,” “wrinkles.”

These micro-pain points can each become their own testable angle, each appealing to a slightly different subset of your audience.

That granularity often unlocks a stronger emotional connection and audience segments you might not have known existed without testing.

Angle testing isn’t just about the story you tell - it’s about who you’re telling it to. Meta rewards creative built for different personas. That means pairing each angle with a distinct visual and tone, so the ad not only resonates with a new motivator but also qualifies as a unique concept.

Once you’ve found a winning angle/messaging?

Take that top-performing messaging and angle, but change the format and visuals so the execution is VERY different - and therefore reaches a different individual, who may prefer to consume content in a different way from how the original message was delivered.

For example…

  • If the angle is “brain fog,” one execution could be a relatable demo by a young creator, another a testimonial from a midlife woman, a third a polished explainer, a fourth a static headline, a fifth motion-graphic-led - the list goes on.
  • Same product, same angle - but visually diversified to match different personas.
  • Don’t just say it differently - show it differently.

Offer Testing: Same Deal, Different Spin

One of the most effective tests you can run is on your offer positioning. Sometimes a small tweak in copy can completely change how compelling it feels.

You might test:

  • “Buy One Get One Free” vs. “50% OFF”
  • “Limited Time Only” vs. “While Supplies Last”
  • “90-Day Money-Back Guarantee” front and centre vs. in fine print

Here’s a bunch of offers you can test to see what resonates most with your audience.

Awareness-Level Messaging: Speak to Where They Are

People interact with your ad differently depending on how aware they are of their problem or your product. Your messaging should reflect that.

Let’s say we’re promoting a gut health supplement (a simple per-stage budget sketch follows this list):

  • Problem Unaware (10% of budget): “Why you're always tired (and what your gut has to do with it)”
  • Problem Aware (40% of budget): “Is your gut health behind your fatigue?”
  • Solution Aware (30% of budget): “This gut-friendly supplement vs. the competitor”
  • Product Aware (15% of budget): “This gut-friendly supplement sold out 5 times. Here’s why”
  • Most Aware (5% of budget): “Trusted by 100k+ women with gut issues. 20% off RN.”
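
If it helps with planning, those stage weightings can be turned into a per-stage spend plan. Here’s a hedged sketch (the weights are the ones listed above; the weekly figure is a placeholder):

```python
# Awareness-stage share of the testing budget, per the split above.
awareness_split = {
    "problem_unaware": 0.10,
    "problem_aware":   0.40,
    "solution_aware":  0.30,
    "product_aware":   0.15,
    "most_aware":      0.05,
}

def stage_budgets(weekly_testing_budget_gbp: float) -> dict:
    """Return a per-awareness-stage spend plan for briefing creative volume."""
    return {stage: round(weekly_testing_budget_gbp * share, 2)
            for stage, share in awareness_split.items()}

print(stage_budgets(3_500))
# {'problem_unaware': 350.0, 'problem_aware': 1400.0, 'solution_aware': 1050.0,
#  'product_aware': 525.0, 'most_aware': 175.0}
```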

Visual Testing: Change the Frame

The words can stay the same, but the visuals? A major lever. Even subtle shifts in setting, model, or colour palette can make or break performance.

Test visuals like:

  • Lifestyle vs. product demo
  • Product in hand vs. on table
  • UGC selfie style vs. polished commercial
  • Different backgrounds or demographics

Value Props & Social Proof: Frame It With Purpose

Every product has multiple value propositions, but which one resonates most?

Try framing like:

  • “Built to last” (quality-led)
  • “Save while getting the best” (price/value)
  • “Loved by 1M+ customers” (social proof)

Also test how you present that social proof:

  • UGC vs. quote overlays
  • Influencer clips vs. regular customer
  • Case study summaries vs. real-time reactions

Creative Diversification Playbook

Creative diversification has shifted from a nice-to-have to a growth essential. Ads that look too similar - especially in the first 3 seconds - are grouped, capped, and penalised.

That means testing is no longer just about iterations on a theme, but about building genuinely different concepts that speak to new people.

Here’s how to diversify in practice:

  1. Rotate Personas: Different motivators unlock different audiences. A product framed for busy parents, young professionals, or wellness-conscious seniors is three different ads - not one reskinned execution.
  2. Change the Hook, Change the Audience: The first 3 seconds define whether an ad is considered unique. Open with a question, a bold claim, a visual demo, or a testimonial. Each hook type attracts a different audience segment.
  3. Push Bigger Visual Swings: Move beyond incremental tweaks. Swap locations, creators, production styles, and even formats (UGC selfie vs. high-production vs. static). If your ad looks the same on mute, it’s too similar.
  4. Recycle With Fresh Context: When an ad wins, give it a second life by re-shooting b-roll, changing the delivery style, or inserting new cultural references. The message can stay - the package must evolve.
  5. Layer Diversification Into Briefs: Don’t leave variation to the edit room. Build persona insights into briefs from the start. Outline not just “what we’re saying,” but “who we’re saying it to” and “how it should look different.”

To Wrap It Up...

Creative testing used to be about more ads. Today, it’s about different ads.

Scaling on Meta isn’t about guesswork or re-skinning winners anymore. It’s about building with diversification baked in:

  • Hooks that open new doors.
  • Angles that resonate with distinct personas.
  • Visuals that shift context, not just copy.
  • Offers that feel fresh because they’re framed differently.

The ads that scale longest and cost least are the ones Meta sees as unique concepts and your audience sees as personally relevant.

When planning briefs, ask yourself:

  • Does this ad look too similar to what I’ve already run?
  • Which persona does this speak to?
  • What motivator or context am I unlocking here?
  • How will this learning fuel the next round of diversification?

How to create content for testing on Meta

If you’re serious about building scalable, high-performing ads, your creative should be designed for it.

That means structuring, scripting and shooting content in a way that allows you to pull it apart, reassemble it, and test different combinations of hooks, bodies, and CTAs without having to reshoot everything from scratch.

In this section, we’ll break down how to approach creative architecture in a test-friendly way.

The Modular Ad Framework

Your goal is to script and shoot in three interchangeable sections: the hook, the body, and the CTA.

Each section should be written with 3–5 variations that hit the same angle from different emotional or messaging styles.

Let’s walk through how to approach each one in granular detail.

1. The Hook

Hooks are arguably the single most important part of an ad.

They are your scroll-stopper. And your ad should never rely on just one.

Instead, script hooks across different strategy types with different angles and formats.

For example:

Hook Strategy Framework

Hook Ideation Table

🎬 Tip: Record hooks separately from the body. Keep them short (1–2 lines), and shoot with different tone options or different creators who may resonate with different audiences.

2. The Body

Once you’ve got their attention, the body of the ad needs to deliver the story. But the key is: write and shoot the body in testable blocks, not one flowing monologue.

Each “block” should be:

  • 1–3 sentences
  • Self-contained
  • Focused on a single angle (e.g. credibility, benefit, proof)

Think of these as plug-and-play building bricks that can be reordered or swapped based on the hook or CTA.

Body Message Block Types ✍

3. The CTA

A good CTA is not always “Buy now.” You need to match audience intent and awareness level…

CTA Examples

Putting It Together

Here’s a template shooting structure you can copy and adapt for any campaign.
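
The pay-off of the modular approach is combinatorial: a handful of hooks, body blocks, and CTAs multiply into dozens of distinct assemblies without a reshoot. A quick illustration (the block names and counts below are placeholders, not a prescription):

```python
from itertools import product

hooks  = ["curiosity question", "bold claim", "creator selfie", "demo shot"]
bodies = ["credibility block", "benefit block", "proof block"]
ctas   = ["Learn more", "Try it risk-free", "Shop the sale"]

# Every hook/body/CTA combination is a candidate edit from a single shoot.
variants = list(product(hooks, bodies, ctas))
print(len(variants))  # 4 x 3 x 3 = 36 possible assemblies
```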

Final Thoughts

If you walk away with one insight from this section, let it be this:

Don’t build ads. Build toolkits.

The highest-performing brands don’t rely on a perfect 60-second ad. They build modular, flexible, testable creative systems.

Now you can too!

How to structure creative tests on Meta

Dialled-in creative testing is the difference between “nice ads” and repeatable revenue. On Meta, the fastest path to truth is isolation - giving each idea, audience, or asset its own lane so the algorithm can hand you a clean verdict. This guide walks you through three battle-tested tiers, matched to the size of your testing budgets:

  1. Concept Isolation: Best for brands with serious firepower who need to know which big idea wins before sweating thumbnails.
  2. Persona Isolation: Ideal when distinct shopper mindsets exist, but spend is mid-range; you funnel budget into a few audience-specific ad sets to uncover message-market fit.
  3. Sprint Isolation: Built for lean budgets, consolidating every new creative into a single “sprint” ad set so every pound buys both learning and sales.

Tier 1: Concept Isolation Testing

In a nutshell, concept isolation testing works by spinning up separate ad sets for each big idea - e.g., “Sleep-better angle”, “Doctor-endorsed angle”, “Price-drop angle”. Each ad set contains 3-15 creatives that vary only micro-elements like hooks or CTAs, so the ad-set results cleanly tell you which concept drives demand.

When you have the firepower to fund multiple learning funnels in parallel, the smartest first question isn’t necessarily “Which thumbnail?” - it’s “Which idea resonates?” Concept isolation testing answers that question quickest, protects large budgets from diffuse spending, and sets the stage for other downstream tests once you’ve crowned a hero concept.

Why it works for higher budgets

  • Every concept gets its own budget line, so you’re funding multiple ad sets in parallel.
  • The playbook targets 50+ conversions per ad set per week to make a definitive call - only possible when total spend is big enough.

When to use

  • DTC brands that can regularly invest £5-10k/day in testing.
  • Launches or promos where you must decide quickly which idea deserves scale.

Structure: Concept-separated ad sets

Budget: AOV * 3+

Target: 50+ conversions per ad set weekly

Ad set creative volume: 3-15 creatives per concept

Cadence: 3-7 days

Testing volume: 50+ creatives a week

How to structure the concept isolation testing campaigns

This example assumes a testing budget of ~£100k and an AOV of ~£30, meaning a £100 daily ad set budget is a sound approach. The approach is agile due to budget flexibility.

Simplified campaign structure
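
The original shows this structure as a visual; as a rough stand-in, here is one hedged way to represent it in code, assuming one ad set per concept at the £100/day level described above (concept names and creative counts are placeholders):

```python
# Illustrative only - each big idea gets its own ad set and budget line.
concept_isolation_campaign = {
    "campaign": "Creative Testing | Concept Isolation",
    "ad_sets": [
        {"concept": "Sleep-better angle",    "daily_budget_gbp": 100, "creatives": 10},
        {"concept": "Doctor-endorsed angle", "daily_budget_gbp": 100, "creatives": 10},
        {"concept": "Price-drop angle",      "daily_budget_gbp": 100, "creatives": 10},
    ],
}
```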

Tier 2: Persona Isolation Method

With the persona isolation testing method, instead of splitting by idea, you split by audience persona. One ad set contains all variations meant for “Persona 1”, the next for “Persona 2”, and so on. Inside each ad set you drop 8-20 creatives that speak to that group’s pain points and gather learnings from there.

The fastest road to profit isn’t looking for more winning concepts. It’s about spotlighting and homing in on the different mindsets that already make up the bulk of your revenue, then letting Meta’s algorithm optimise within those lanes. Persona isolation gives you the statistical power, clarity, and budget efficiency to do exactly that.

Why it fits mid-range budgets

  • You consolidate spend into fewer ad sets - usually two - with larger daily budgets (about £250) so each hits the 25-50-conversion threshold and gathers statistically significant learnings.
  • You still learn which messages resonate, but without starving the algorithm.

When to use

  • Brands with distinct shopper cohorts (e.g., beginners vs. experts) but limited testing headroom.
  • Situations where audience-copy fit is the growth lever, not radical new angles.

Structure: Persona testing approach

Budget: AOV * 5+

Target: 25-50 conversions per ad set weekly

Ad set creative volume: 8-20 creatives per batch

Cadence: 7-10 days

Testing volume: 20+ creatives a week

How to structure persona isolation testing campaigns

This example assumes a testing budget of ~£70k/month and an AOV of ~£50, meaning a £250 daily ad set budget is a sound approach. The approach is less agile, as we need to adopt a more consolidated approach to maximise learnings and the likelihood of budget ROI.

Simplified campaign structure

Tier 3: Sprint Isolation Method

Sprint-isolation testing means dropping every new creative you want to evaluate into one single “sprint” ad set and letting Meta’s algorithm sort the winners for a fixed, short window.

Instead of carving budget across multiple ad sets (concept or persona isolation), you consolidate spend and conversions in one place, so Andromeda’s retrieval layer can gather statistically useful data on each ad without starving any of them.

Why Sprint-Isolation Testing Is the Smart Play for Lower Budgets

When your monthly testing budget is under £30k (£300-£1k per day), every pound needs to work harder: it has to buy both learning and sales. Sprint isolation - running all new creatives in one highly consolidated ad set for 10-14 days - is purpose-built for that reality.

Structure: Sprint based testing

Budget: AOV * 8+

Target: 15-25 conversions per sprint

Ad set creative volume: 10-14 creatives per batch

Cadence: 10-14 days

Testing volume: 10+ creatives a week

How to structure sprint isolation testing campaigns

This example assumes a testing budget of ~£15k/month and an AOV of ~£50, meaning a £150 daily ad set budget is a sound approach. The approach is highly consolidated to ensure budget goes to high performers in the shortest possible time.

Simplified campaign structure

Meta Optimisation and Scaling Framework

Launching a batch of fresh creatives is only half the battle - the real gains come from what you do in the first 30 days. Think of the process as a relay race with four clearly defined legs:

  1. Launch & Monitor (Days 1-3): Get everything live and resist the urge to meddle while Meta enters its Learning phase.
  2. Initial Assessment (Days 4-7): Focus on early health checks (delivery status, scroll-stop strength, CPM sanity) rather than bottom-line ROAS.
  3. Performance Evaluation (Days 8-14): Let conversion data speak, promote the ads that clear your CPA/ROAS bar, and sunset the true laggards.
  4. Scaling Decisions (Day 15+): Pour fuel on validated winners, either vertically (budget bumps) or horizontally (duplicate into scaling campaigns).

Following this cadence forces patience early, precision mid-stream, and aggression only when the numbers prove it. Stick to the guardrails below and you’ll turn chaotic launches into a disciplined, scalable pipeline backed by data.

Phase 1: Launch and monitor (Days 1-3)

Phase 2: Initial Assessment (Days 4-7)

Actions

  • Pause ads only when oCTR < 0.8% and CPM > 150% of average - the dual trigger avoids false negatives (see the sketch after this list).
  • Increase budget on ad sets showing promise and promote winning creatives into BAU/ASC scaling campaigns.
  • Tag learnings in your tracker: “Hook = curiosity pull wins; product-demo hook loses”.
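
As a hedged sketch of that dual trigger (thresholds taken from the bullet above; the function and its inputs are our own illustration, not a Meta API):

```python
def should_pause(octr: float, cpm: float, account_avg_cpm: float) -> bool:
    """Pause only when BOTH triggers fire, which avoids false negatives."""
    weak_hook = octr < 0.008                       # outbound CTR below 0.8%
    expensive_reach = cpm > 1.5 * account_avg_cpm  # CPM above 150% of average
    return weak_hook and expensive_reach

print(should_pause(octr=0.006, cpm=14.0, account_avg_cpm=8.0))  # True -> pause
```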

Phase 3: Performance Evaluation (Days 8-14)

Primary question: Which ads deliver profitable conversions at scale?

  • ROAS: ≥ 100% of goal
  • CPA: ≤ target CPA
  • Conversion volume: > 10 events

Optimisation moves

  • Keep ads with high conversion volume and ROAS above 85% of target.
  • Pause ads with low conversions AND ROAS below 70% of target.
  • Scale winners by 15-25% and continue to promote them into scaling campaigns.

Phase 4: Scaling Decisions (Days 15+)

Scale indicators

Scaling method: if budget allows, scale both horizontally and vertically, and continue to promote winners.

Note: if you’ve done the necessary checks and you have an outlier hitting the required performance benchmarks before this point - scale.

Live by these rules

Before you start clicking pause or scale, anchor yourself to three principles that keep every decision objective and every pound productive.

  1. Be Patient
    • 72-hour rule: No creative decisions in first 3 days
    • Conversion threshold: Minimum 10 conversions before evaluation
    • Learning phase: Allow 7-14 days for optimisation
  2. Volume Over Perfection
    • Andromeda: Algorithm favours more creative options
    • Testing velocity: Launch 10-60+ creatives weekly depending on budget
    • Iteration speed: Weekly creative refreshes for top performers
  3. Decision Matrix (sketched in code after this list)
    • IF conversions ≥ 50 AND ROAS ≥ 150% of target → SCALE & PROMOTE
    • IF conversions 12-49 AND ROAS ≥ 100% of target → PROMOTE
    • IF conversions 5-14 AND ROAS < 90% of target → MONITOR
    • IF conversions < 5 AND ROAS < 70% of target → PAUSE
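
Here is that matrix as a minimal code sketch; where the stated conversion bands overlap (e.g. 12-14), we assume the rules are checked in priority order from scale down to pause:

```python
def creative_decision(conversions: int, roas_pct_of_target: float) -> str:
    """Apply the decision matrix top-down; ROAS is expressed as % of target."""
    if conversions >= 50 and roas_pct_of_target >= 150:
        return "SCALE & PROMOTE"
    if 12 <= conversions <= 49 and roas_pct_of_target >= 100:
        return "PROMOTE"
    if 5 <= conversions <= 14 and roas_pct_of_target < 90:
        return "MONITOR"
    if conversions < 5 and roas_pct_of_target < 70:
        return "PAUSE"
    return "HOLD"  # outside the stated bands: keep running and re-check

print(creative_decision(conversions=18, roas_pct_of_target=120))  # PROMOTE
```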

Weekly Budget Distribution Examples

This is how we’d suggest structuring budgets within your account; a quick arithmetic sketch follows the two examples below.

High budget example (£50,000+/week)

In this example, we have a higher proportion of ad spend allocated to existing proven ads due to account stability.

  • High performing creatives: £25,000 (50%)
  • Scaling existing tests: £7,500 (15%)
  • Testing and experimental: £17,500 (35%)

Lower budget example (£5,000/week)

In this example, we have a higher proportion of ad spend allocated to net new concept testing due to less stability and the need to allocate spend to the highest leverage point to find growth.

  • High performing creatives: £2,500 (50%)
  • Scaling existing tests: £750 (15%)
  • Testing and experimental: £1,750 (35%)
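
A quick way to sanity-check these splits at any spend level (the 50/15/35 proportions are the ones used in both examples above):

```python
def weekly_allocation(total_gbp: float) -> dict:
    split = {
        "high_performing_creatives": 0.50,
        "scaling_existing_tests":    0.15,
        "testing_and_experimental":  0.35,
    }
    return {bucket: round(total_gbp * share) for bucket, share in split.items()}

print(weekly_allocation(50_000))  # 25000 / 7500 / 17500
print(weekly_allocation(5_000))   # 2500 / 750 / 1750
```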

Checklists to Live By

Common mistakes to avoid

❌ Turning off ads too early (before 10 conversions)

❌ Testing too few creatives (volume is key in Andromeda)

❌ Ignoring learning phases (patience required)

❌ Over-optimising (let the algorithm learn)

❌ Budget fragmentation (ensure sufficient spend per test)

Testing checklist

✅ Determine budget tier and testing methodology

✅ Set up proper campaign structure

✅ Prepare creative volume based on budget

✅ Establish optimisation schedule and rules

✅ Define success metrics and KPIs

✅ Create weekly review and scaling process

✅ Document learnings for future tests

Is Your Creative Strategy Ready for the Andromeda Engine?

You now have the framework to succeed in 2025: High-volume modular content, variable isolation, and structured campaign architecture.

But knowing how to test is one thing; having the resources to produce, analyse, and iterate is another.

If you’re ready to move from manual guessing to a data-backed creative machine, we can help. We’ve generated over £400M in revenue for DTC brands by feeding Meta’s algorithm exactly what it wants: diverse, high-performing creative at scale.

Don’t let creative fatigue stall your growth.

Book a call and chat to us here


