The Most Active and Friendliest
Affiliate Marketing Community Online!


Setting up Split Tests on Facebook

Michael38

New Member
affiliate
I launched a couple of ads just to get an idea of what's working. Let's say ads A, B, and C. Ad A outperformed ads B and C combined by around 300%, so I declared it the winner.

Then I created a new campaign in Ads Manager (as it wouldn't let me convert the original one into a Facebook split test), used the same targeting as before, and launched the same ad with 5 different images. Got almost no results.

Also, when I checked the stats of the winning ad from the original set, there was a specific age + gender group that performed exceptionally well. I duplicated the original ad set, changed the targeting to that age + gender group (kept everything else as it was), and launched it with just the winning ad. It's getting no results either.

What am I doing wrong here, and how do you go about setting up split tests when the same ad gets good results in one ad set and none in another?
 
  1. How big was your initial sample in the split-testing process?
  2. Could the observed results have been a statistical anomaly, a short-lived trend caused by a small sample size?
 
1. There were around 43 million people within the targeted interest, and the ads reached nearly 3,000 people altogether before one of them hit above a 95% confidence level on the statistical significance calculator.

2. I'm not sure how to find that out.

Also, from what I've heard, even duplicated ad sets don't always perform the same, so should I take into account that the Facebook algorithm might simply optimize one ad set better than the other?
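For reference, the "95% confidence" that significance calculators report for a CTR comparison is usually a two-proportion z-test. A minimal sketch (the click and impression numbers below are made up for illustration, not the campaign's actual figures):

```python
from math import sqrt, erf

def ctr_significance(clicks_a, impr_a, clicks_b, impr_b):
    """Two-proportion z-test on the CTRs of two ads.
    Returns the two-sided confidence that the CTRs genuinely differ."""
    p_a = clicks_a / impr_a
    p_b = clicks_b / impr_b
    p_pool = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impr_a + 1 / impr_b))
    z = abs(p_a - p_b) / se
    return erf(z / sqrt(2))

# Illustrative numbers only: a 6% CTR vs. a 3% CTR over 1,000 impressions each
conf = ctr_significance(60, 1000, 30, 1000)
print(f"confidence: {conf:.1%}")  # well above the 95% bar
```

Note this only says the CTRs differ; it says nothing about conversions, which is the distinction made later in the thread.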
 
I understand one can reach a pretty high CTR by making false claims etc., which is not what we're after.

What I look at is the furthest point within the campaign I can optimize for. Between the ad and the offer I collect the emails of people who go through the landing pages. Since I currently have no way to place a pixel after the offer page, I can only optimize for the leads I'm getting for myself.

Thus I mostly look at how many leads each ad brings in, and in this case it so happened that the ad with the most leads also had the highest CTR.

How would you recommend optimizing and testing correctly, then?
 
Reach means nothing in a statistical analysis of CR (conversion ratio); that's typical Facebook gobbledygook.

Reach/affinity/CTR might be a useful metric if you are trying to evaluate a display ad's effectiveness.
However, this number can be subjectively twisted by making outlandish claims, and by the offer's (product's) value proposition not matching the advertised incentive.

Put simply: false advertising of a product that does not deliver what the ad says it will.

So 3,000 impressions is the best you could do --but the CTR and the CR of that 'best' are what matter to you in the end game.
 
The goal is to get a campaign profitable while also capturing emails to promote offers on the back end. True, things like repeat sales and lifetime customer value are something I cannot calculate without data, which is why I don't really bring them into the equation.

The point is to have a lower CPC than EPC, which I guess is what people are usually after. The EPC also needs to be significantly higher to account for software and other expenses. Thus I need to optimize for spending less on ads than I make on sales of the first offer.
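The CPC-vs-EPC condition above can be written down directly. A minimal sketch (the function names and the per-click overhead term are my own illustration, not a standard formula):

```python
def is_profitable(cpc, epc, overhead_per_click=0.0):
    """The campaign is viable only if earnings per click exceed
    cost per click plus amortized software/other expenses."""
    return epc > cpc + overhead_per_click

def breakeven_cr(cpc, payout):
    """Conversion rate at which EPC (= CR * payout) just covers CPC."""
    return cpc / payout

# Illustrative numbers only (not from the thread):
print(is_profitable(cpc=0.40, epc=0.55, overhead_per_click=0.05))  # True
print(f"break-even CR at $0.40 CPC, $20 payout: {breakeven_cr(0.40, 20):.1%}")
```

Anything above the break-even CR leaves margin for the software and other expenses mentioned above.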

"show me the data ..." Are we talking data from the initial test like impressions, clicks etc?
 
Are there any resources you would recommend for studying campaign optimization further, or is this insight acquired mostly through testing and experience?

The reason I'm asking is that everywhere I read, people keep saying to just split test age groups, then a couple of images, then some headlines. I see these people running tests with $300 and they either get the "winner" or move on.

I just can't seem to figure out how they do it. I've even joined a more private forum with some seasoned marketers, and their advice is not much different.

I'd like to learn the way you suggest, though, because it seems more systematic and not purely based on guessing. However, to reach 576 conversions I'd need to spend thousands of dollars. How do all these people test on such tiny budgets?
 
show me the data ...
You have too many factors.
What is the point (the goal)? And what are the possible combinations?
  • a) cost per click-through
  • b) cost per conversion
  • c) ???
  • Then quantity can matter: 1,000 click-throughs, 40 conversions @ $3.50/sale vs. 400 click-throughs, 30 conversions @ $2.80
You need huge data to factor things like customer acquisition and repeat-sales pro forma projections -- I am assuming that is the point of the *email subscribe* factor? Don't overthink this without having historical data to use.
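The quantity comparison in the last bullet can be made concrete. A sketch with an assumed $0.10 CPC (the CPC is not given above, so it is a placeholder):

```python
def ad_stats(clicks, conversions, payout, cpc):
    """Basic economics of one ad in a test."""
    cr = conversions / clicks
    revenue = conversions * payout
    cost = clicks * cpc
    return {"CR": cr, "revenue": revenue, "cost": cost, "profit": revenue - cost}

# The two scenarios from the bullet above, with an assumed $0.10 CPC:
a = ad_stats(1000, 40, 3.50, 0.10)  # 4% CR, $140 revenue
b = ad_stats(400, 30, 2.80, 0.10)   # 7.5% CR, $84 revenue
print(a)
print(b)
```

At this assumed CPC the lower-revenue ad actually nets more profit (its higher CR means far less click spend), which is exactly why cost per conversion, not revenue alone, is the metric to compare.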
 
Are you saying then that for every ad within a test there should be at least 576 clicks and roughly 11 sales to be able to say which performs best? Or is it across all the ads altogether? If I'm testing A vs. B vs. C, all leading to the same offer, do I need 576 for each, or altogether?

How about ads that are simply not getting results? When is the time to kill them?
 
It makes sense.

Once we reach 10+ sales and pause the other ads, how do we approach the next steps? Do we look into the data, find what kind of audience converted best, and adjust targeting accordingly?
 
"show me the data ..." Are we talking data from the initial test like impressions, clicks etc?
No --server logs, url paths; You cannot profile and segment ads and their traffic demographics from simple tracker or dashboard data. Click stats don't tell you much IMHO. A small number of conversions is only a *possible* indicator too.
  • 576 CTR per ad should yield a marginally reliable statistic for CR and EPC
  • get a few hundred conversions with the best ad and you can have data on the demographics of each conversion
  • optimize for a few of the best converting demographics where a pattern is seen, if it is seen.
Facebook's persona demographics are? What ever they want to tell you --they are relative to each other --that's it.
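The second and third bullets, tallying conversions by demographic and looking for a pattern, can be sketched like this (the record format is hypothetical, not an actual log or export schema):

```python
from collections import Counter

# Hypothetical per-conversion demographic records assembled from server logs:
conversions = [
    {"age": "25-34", "gender": "F"},
    {"age": "25-34", "gender": "F"},
    {"age": "35-44", "gender": "M"},
    {"age": "25-34", "gender": "M"},
]

# Tally conversions per (age, gender) segment.
segments = Counter((c["age"], c["gender"]) for c in conversions)

# Rank segments; only act on a "pattern" when the counts are large
# enough to trust -- a few hundred conversions, per the advice above.
for (age, gender), n in segments.most_common():
    print(f"{age} / {gender}: {n}")
```

With only four records, as here, no ranking is trustworthy; the point is the shape of the analysis, not these toy counts.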
 
I like that idea. I guess it requires much more conversion data to find a trend, but it's definitely a thorough way to do it.

Are you suggesting then that Facebook's "auto optimization" and their algorithm are not to be relied on?
 
That makes sense. Basically, unless we have learned data, an optimization process done on just FB data can be pretty inaccurate.

However, you can probably learn location from someone's IP, then derive housing and income data from that. But the IP won't show you data like age and gender. How else do you retrieve them, other than asking people directly?
 
What number is relevant to you?
576 CTR << click-throughs, not *CTR*, and not meaning conversions
576 × 0.02 = 11.52
576 clicks × 2% conversion ratio = ~11 sales, a statistically valid sample.
The better the CR is, the fewer clicks are needed for a reliable sample.

So, $700 at $0.40 CPC, or less as your conversion rate dictates.
If you are paying CPM, then whatever it takes ...
 
What I am going to say is mostly a moot point because the advice you were given is so incredibly top notch. However, there is one thing that stands out to me in your original post. You say that after declaring a winner you used that campaign but switched out the images with 5 others.

launched the same ad with 5 different images. Got almost no results.

Switching out your images for 5 others is effectively running 5 new, untested campaigns. People are very visual creatures. It's very possible the reason you didn't get the same results is that the images you used didn't have the same impact on the audience as the first.

This is something to keep in mind in the future when applying greybeard's advice.
 
Bottom line: you need 10+ sales to trend a test, IMHO. The first ad to reach the threshold wins; I would stop (or suspend) the others in the test.
5 clicks and 1 sale is not a winner; that may just be *you were lucky*.
Do the math: if the CR is 10% or 20%, you would need a lot fewer clicks to reach 10+.

How about ads that are simply not getting results, when is the time to kill them?
Kill them at 3-10 times the reward for the sale (however you are calculating that); the lower the payout, the higher the multiple.
 
Design your own metrics.

Personally, I would also use my own databases relating a user's IP to their ZIP code (70%-80% accuracy), plus housing costs, reported AGI income, and other data (US only, for now). Geo-fencing within a 2 or 3 km radius, scaling geographically as well. Money matters, and you can get census data on the other metrics for the tracts in that ZIP code too.
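The 2-3 km geo-fence check mentioned above is straightforward with the haversine formula once you have coordinates for an IP. A sketch (the coordinates are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def in_geofence(user, center, radius_km=3.0):
    """True if the user's coordinates fall within the fence radius."""
    return haversine_km(user[0], user[1], center[0], center[1]) <= radius_km

# Illustrative fence center and a nearby point:
center = (40.7580, -73.9855)
print(in_geofence((40.7614, -73.9776), center))  # True -- under 1 km away
```

The accuracy of the result is only as good as the IP-to-location database feeding it, which is the 70%-80% caveat above.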
 
I don't know it well enough in use -- if it works, it works, but Facebook is always acting in its own best interests. That interest is to keep you spending money.
Facebook user data that is user-submitted is tainted. Data that is learned may be of more use.

What I mean is, if you select:
  • age, it may be 50% to 70% accurate,
  • gender, maybe 90%,
  • occupational claims, maybe 50%,
  • and income -- maybe as little as 40% accurate.
Geolocation is ~90%, as much Facebook traffic comes from the mobile app and uses GPS (Tor and proxies are the wildcards).

Learned data with regard to affinity toward offers/ads is not made available to the public, AFAIK. But Facebook certainly has this ...
 
How else do you retrieve them other than asking people directly?
You can't as an individual, but using geolocation data (that is accurate) you can profile that tract's (neighborhood's) age averages. You are looking for possible matching patterns at that point. You would need to run some large ad volumes -- so the profit may or may not be there.
 