
Module 07

Traffic-test campaign: find the winning combo cheap

12 min read · pairs with tools/facebook.

You have an offer, a pre-lander, and a working pixel + postback. Before you spend real money on a conversion campaign, you run a traffic-objective test: many small adsets, a single creative each, optimized for the cheapest possible click, not for sales. The goal is to find which angle × audience × creative combination delivers the highest lander CTR at the lowest cost per click. That combination is your winner; you'll graduate it to a sales campaign in Module 08.

Why "Traffic" objective for testing, not "Conversions"

Conversions optimization needs ~50 purchase events/week per adset to stabilize. At ~1% conversion, 50 sales means 5,000 clicks; at anything near breakeven spend (a $30 payout ≈ $0.30 per click at 1% conversion), that's 50 × $30 = $1,500+/week per adset just to start getting useful signal. You can't afford that per test, and you'd be optimizing on noise anyway.

Traffic objective stabilizes on clicks. At the ~1% conversion rate above, clicks are roughly 100× more frequent than purchases, so the algorithm learns fast and gives you reliable CPC + landing-page CTR data within 2–3 days at $5/day/adset.
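A back-of-envelope check of that math (a sketch; the events-needed and conversion-rate figures come from the text, the $0.30 CPC is an assumption):

```python
# Weekly spend needed to feed the optimizer enough events to stabilize.
# events_needed=50 and conv_rate=0.01 come from the text; cpc is assumed.
def weekly_signal_spend(events_needed=50, conv_rate=0.01, cpc=0.30):
    clicks_needed = events_needed / conv_rate   # 50 / 0.01 = 5,000 clicks
    return clicks_needed * cpc                  # 5,000 * $0.30 = $1,500

print(weekly_signal_spend())               # 1500.0 -- purchases as the event
print(weekly_signal_spend(conv_rate=1.0))  # 15.0   -- clicks as the event
```

The second call is the whole argument for traffic testing: when every click counts as an optimization event, the same 50-event signal costs dollars, not thousands.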

What you measure in a traffic test

  • CPM (cost to show your ad 1,000 times): proxy for audience saturation.
  • CTR (link clicks ÷ impressions): proxy for hook strength.
  • Lander CTR (CTA clicks ÷ landing-page views): proxy for lander quality.
  • CPC (cost per landing-page view): the bottom line of the test.
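In code, the four metrics reduce to a few divisions (a minimal sketch using the definitions above; the function and field names are mine):

```python
# Raw counts in, test metrics out. All rates returned as fractions.
def traffic_test_metrics(spend, impressions, link_clicks, lpvs, cta_clicks):
    return {
        "cpm":        spend / impressions * 1000,  # audience saturation proxy
        "ctr":        link_clicks / impressions,   # hook strength proxy
        "lander_ctr": cta_clicks / lpvs,           # lander quality proxy
        "cpc":        spend / lpvs,                # cost per landing-page view
    }

# e.g. $12 spend, 5,000 impressions, 150 link clicks, 120 LPVs, 20 CTA clicks
print(traffic_test_metrics(12, 5000, 150, 120, 20))
# {'cpm': 2.4, 'ctr': 0.03, 'lander_ctr': 0.1666..., 'cpc': 0.1}
```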

Set up the audiences

You're testing 3 audiences in parallel per creative. Mix broad, interest-based, and lookalike (spec sketch after the list):

  • Broad / no detailed targeting in the offer's primary country. Wide enough that the algorithm finds buyers itself.
  • Interest stack (3–5 related interests in the niche), e.g. for blood sugar: diabetes mellitus, low-carb diet, glucose meter, blood glucose monitoring.
  • Lookalike (1–3% of country) seeded from a Custom Audience. On day one you won't have your own audience yet: either (a) use a hashed seed list (Module 03 footnote), or (b) skip the lookalike on the first test and add it once your pixel has 100+ Lead events.
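The three columns as data (an illustrative sketch; the dict shape is mine, not the Marketing API's, and the seed ID placeholder stays a placeholder):

```python
# One entry per audience column of the test matrix. Interest names are the
# blood-sugar examples from the list above.
audiences = [
    {"name": "A_broad",     "country": "US", "detailed_targeting": None},
    {"name": "B_interest",  "country": "US",
     "detailed_targeting": ["diabetes mellitus", "low-carb diet",
                            "glucose meter", "blood glucose monitoring"]},
    {"name": "C_lookalike", "country": "US", "lookalike_ratio_pct": 1,
     "seed_audience_id": "<custom-audience-id>"},  # or skip on the first test
]
```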

Uploading a Custom Audience as a lookalike seed

Business Manager → Audiences → Create → Custom Audience → Customer List → upload your SHA-256 hashed file (one hash per line). Wait 30–60 min for processing. Then Create → Lookalike → seed = that custom audience → country + 1% size.

Operator equivalent: fb_upload_custom_audience(hash_file, name) then fb_create_lookalike(seed_audience_id, country="US", ratio_pct=1).
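A minimal sketch of preparing that hashed file; the lowercase-and-trim normalization matches Facebook's customer-list matching rules, and the helper name is mine:

```python
import hashlib

# One SHA-256 hex digest per line, as the upload expects. Facebook matches on
# normalized values, so lowercase and strip whitespace *before* hashing.
def write_hashed_seed(emails, out_path="seed_hashes.txt"):
    with open(out_path, "w") as f:
        for email in emails:
            normalized = email.strip().lower()
            f.write(hashlib.sha256(normalized.encode("utf-8")).hexdigest() + "\n")
    return out_path

# Then the operator calls from above:
# audience_id = fb_upload_custom_audience(write_hashed_seed(emails), "seed-list")
# fb_create_lookalike(audience_id, country="US", ratio_pct=1)
```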

Set up the creatives

From Module 04 you have 5+ spied competitor ads with their hooks. Pick the 3 strongest hooks. For each hook, render your version:

  • Image ad (1080×1080 square — works in feed + reels)
  • Short video (9–30 sec, 1080×1920 portrait — works in stories + reels + feed)
  • Primary text + headline matching the hook's angle

3 hooks × 1–2 formats = 3–6 distinct creatives. Don't go above 6 in your first test or you'll fragment spend.

The 3×3 test structure

One campaign. Multiple adsets. The matrix:

|            | Audience A (Broad) | Audience B (Interest) | Audience C (Lookalike) |
|------------|--------------------|-----------------------|------------------------|
| Creative 1 | Adset A1           | Adset B1              | Adset C1               |
| Creative 2 | Adset A2           | Adset B2              | Adset C2               |
| Creative 3 | Adset A3           | Adset B3              | Adset C3               |

9 adsets total. Each adset: 1 creative + 1 audience + $5/day. Total spend: $45/day. Run for 2–3 days = $90–$135 for the entire test. Cheap.

Don't put multiple creatives in one adset. Facebook will pick a "winner" within the adset within hours and only show that one. You'll learn nothing about the others. One creative per adset.
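The expansion from matrix to adsets is mechanical; here's a sketch of the structure the operator builds (an assumed shape, not the tool's actual source):

```python
from itertools import product

creatives = ["creative_1", "creative_2", "creative_3"]   # one per hook
audience_names = ["A_broad", "B_interest", "C_lookalike"]

# One adset per cell: single creative, single audience, flat $5/day.
adsets = [
    {"name": f"{a.split('_')[0]}{c[-1]}",   # A1, B1, ... matching the table
     "creative": c, "audience": a, "daily_budget_usd": 5}
    for c, a in product(creatives, audience_names)
]
assert len(adsets) == 9                                  # 3 x 3 cells
assert sum(s["daily_budget_usd"] for s in adsets) == 45  # $45/day total
```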

Campaign settings that matter

  • Objective: Traffic (or "Engagement → Link clicks" in the newer ODAX UI).
  • Optimization: Landing Page Views (not Link Clicks — LPV requires the pixel to confirm the page actually loaded, filtering accidental clicks).
  • Budget: $5/day, adset-level (not CBO at this stage — CBO concentrates spend on the algorithm's pick before you have data).
  • Schedule: standard pacing, no day-parting.
  • Placement: Advantage+ Placements ON (let FB pick — it's surprisingly good).
  • Destination: your pre-lander URL with UTM tags so you can attribute back in your tracker.
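Pulled together as one config object (a sketch; these field names are illustrative, not the Marketing API's exact parameters, and the URL is a placeholder):

```python
campaign_config = {
    "objective": "TRAFFIC",                    # "Engagement -> Link clicks" in ODAX
    "optimization_goal": "LANDING_PAGE_VIEWS", # pixel-confirmed loads, not raw clicks
    "budget": {"level": "adset", "daily_usd": 5},  # no CBO at this stage
    "pacing": "standard",                      # no day-parting
    "placements": "advantage_plus",            # let FB pick
    "destination_url": ("https://prelander.example.com/"
                        "?utm_source=facebook&utm_medium=cpc"
                        "&utm_campaign=traffic_test_01"),
}
```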

Reading results after 48 hours

After 2–3 days at $5/day, each adset has spent $10–$15 and shown your ad 3,000–8,000 times. Sort the adsets by lander CTR (CTA clicks ÷ landing-page views, as defined above). Then by CPC.

| Result | What to do |
|--------|------------|
| LPV-CTR > 15% and CPC < $0.50 | Winner. Graduate to the Module 08 sales campaign. |
| LPV-CTR 8–15% and CPC < $0.80 | Promising. Run another 48 hours at the same budget; let it stabilize before pivoting. |
| LPV-CTR < 8% or CPC > $1.00 | Kill. Either the creative doesn't stop the scroll or the audience doesn't care. |
| High ad CTR but low LPV-CTR (people click the ad, then bounce from the lander) | Ad-to-lander mismatch. The ad makes a promise the lander doesn't pay off. Fix the lander headline. |
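The same decision table as a function (a sketch; the thresholds come from the table, but the 2% ad-CTR cutoff for "high CTR" is my assumption, not the course's):

```python
def verdict(lpv_ctr, cpc, ad_ctr=0.0):
    """Rates as fractions, cpc in USD. Mismatch is checked first because it is
    a diagnosis, not a budget decision."""
    if ad_ctr > 0.02 and lpv_ctr < 0.08:
        return "mismatch: fix the lander headline to pay off the ad's promise"
    if lpv_ctr > 0.15 and cpc < 0.50:
        return "winner: graduate to the Module 08 sales campaign"
    if lpv_ctr >= 0.08 and cpc < 0.80:
        return "promising: run another 48h at the same budget"
    return "kill: the hook or the audience is not working"

print(verdict(lpv_ctr=0.18, cpc=0.42))                 # winner
print(verdict(lpv_ctr=0.05, cpc=0.60, ad_ctr=0.035))   # mismatch
```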

Getting ads approved without drama

Most rejections come from policy violations (Module 02). The rest come from landing-page issues: Facebook scans your pre-lander too, and if it claims something the ad doesn't repeat, you can get hit. To minimize the risk:

  • Match the ad copy and the lander headline word-for-word at the top.
  • Avoid before/after imagery on the lander even if your ad doesn't have it.
  • Add a discreet disclaimer footer to the lander: "Results not typical."
  • Use a clean .com domain, not a sketchy TLD.

How the operator runs the test

fb_create_traffic_test(niche, daily_budget, adset_count, creative_ids, audiences, destination_url) builds the whole 3×3 matrix in one call. The agent assembles the inputs by running spy → angle selection → lander build → pixel install first, then hands them all to this tool.

For the first test, the operator pauses at the launch step and asks for a human y/n, since real spend is about to go live. After you've smoke-tested once, you can pass --auto-approve to let subsequent tests fire without prompts.
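A hypothetical launch call wiring the earlier pieces together (the function name and parameters come from the course; every argument value here is illustrative):

```python
result = fb_create_traffic_test(
    niche="blood-sugar",
    daily_budget=45,              # $5/day x 9 adsets
    adset_count=9,
    creative_ids=creative_ids,    # rendered in the Module 04 step
    audiences=audiences,          # broad + interest stack + lookalike
    destination_url="https://prelander.example.com/?utm_campaign=traffic_test_01",
)
# First run: the operator stops at launch and waits for a human y/n.
# Subsequent runs: --auto-approve on the CLI skips the prompt.
```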

Action: Build the 3×3 matrix. Launch at $45/day total. Set a calendar reminder to read results in 48 hours. Resist the urge to check Ads Manager every hour — the algorithm needs time to find buyers.