
EPC Ranking algo

GorillaLeadz

Active Member
Hey all,

So a developer and I are thinking of setting up an EPC ranking algorithm for our comparison sites, one that auto-tests our offers with the traffic we buy on Google Ads.

However, we have some questions... We see companies like RedTrack use EPC ranking algos, but based on what kind of stats should it push an offer up or down? EPC, of course... but how much traffic should we give an offer? And should it re-check each offer's EPC every hour and rank it based on that?

We don't want to waste traffic (money) by letting the algorithm do too much testing, but we also don't want to cut an offer off before it has had enough time to show whether it can generate a good EPC.

Does anyone have experience with this and is willing to share it with us?
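To make it concrete, here is roughly the kind of hourly ranking loop we have in mind. It's only a sketch in Python: the offer-stats structure, the 200-click minimum, and the 10% exploration share are placeholder assumptions, not something we know RedTrack or anyone else actually uses.

```python
# Sketch of an hourly EPC ranking pass (all thresholds are placeholders).
from dataclasses import dataclass

@dataclass
class OfferStats:
    offer_id: str
    clicks: int      # unique clicks sent to this offer so far
    revenue: float   # payout earned from those clicks

def epc(o: OfferStats) -> float:
    # Earnings per click; an untested offer reports 0 until it gets traffic.
    return o.revenue / o.clicks if o.clicks else 0.0

def rank_offers(offers: list[OfferStats], min_clicks: int = 200) -> list[OfferStats]:
    # Only rank offers with enough clicks for their EPC to mean anything;
    # offers still in testing go to the back but keep receiving test traffic.
    tested = [o for o in offers if o.clicks >= min_clicks]
    untested = [o for o in offers if o.clicks < min_clicks]
    return sorted(tested, key=epc, reverse=True) + untested

def traffic_split(ranked: list[OfferStats], explore_share: float = 0.10) -> dict[str, float]:
    # Most traffic goes to the current best EPC; a small exploration share is
    # spread over the rest so the ranking can still change next hour.
    if not ranked:
        return {}
    if len(ranked) == 1:
        return {ranked[0].offer_id: 1.0}
    split = {ranked[0].offer_id: 1.0 - explore_share}
    rest = ranked[1:]
    for o in rest:
        split[o.offer_id] = explore_share / len(rest)
    return split
```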
 
how much traffic should we give an offer

This has been a dramatically over-discussed topic in conversations going back decades. Not so much with a bot or an algo, but essentially the question of how much traffic to run, or how much to spend, before optimizing. It is always decided case by case.

Generally, there is no formula for how much to spend. I know there is an abundance of marketers out there who will tell you differently, but every smart marketer will tell you to spend whatever it takes on your test campaigns until you have enough significant data to show whether or not each campaign has promise.

This is not an area that can be reduced to a formula: offers have different values, traffic sources have different costs, and dayparting, geos, and demographics are factors an algo or bot cannot accurately predict or adjust for.

I have two very close friends doing mid seven figures a year each, and they still require a hands-on approach from their teams to manage the testing of new campaigns.
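If you still want a rough sanity check on what "enough significant data" means for a single offer, a confidence interval on the conversion rate is one way to see how wide the uncertainty still is. The sketch below is under my own assumptions (flat payout, 95% interval); it is not a spend formula.

```python
# Wilson 95% interval on CVR, translated into an EPC range (flat payout assumed).
import math

def wilson_interval(conversions: int, clicks: int, z: float = 1.96) -> tuple[float, float]:
    if clicks == 0:
        return (0.0, 1.0)
    p = conversions / clicks
    denom = 1 + z ** 2 / clicks
    center = (p + z ** 2 / (2 * clicks)) / denom
    half = z * math.sqrt(p * (1 - p) / clicks + z ** 2 / (4 * clicks ** 2)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

def epc_range(conversions: int, clicks: int, payout: float) -> tuple[float, float]:
    lo, hi = wilson_interval(conversions, clicks)
    return (lo * payout, hi * payout)

# Example: 10 conversions on 200 clicks at a $2.00 payout.
print(epc_range(10, 200, 2.00))   # roughly ($0.05, $0.18) EPC, still a wide range
```

The point being that even a couple hundred clicks can leave the EPC range wide enough that "keep testing" is the only honest answer.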
 
Well, we know for sure that with our traffic we always get a conversion within 20 unique clicks (visitors). But sometimes an offer drops in CVR % and therefore also in EPC (the reason might be scrubbing or something else... perhaps the weather, who knows), and you don't want to calibrate the system so that every dip triggers a re-test of the offer and wastes traffic. I guess that's the hard part to decide: at which moment should the algo re-test an offer and when should it leave it alone?

I guess we should just go step by step, keep a close eye on the stats, and slowly add more checks over time.
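Something like this is what we had in mind for the re-test trigger: require a minimum number of fresh clicks and a drop beyond a tolerance band before the algo reacts. The 100-click window and the 30% tolerance are guesses, not tested values.

```python
# Sketch of a noise-resistant re-test trigger (window size and tolerance are guesses).
def should_retest(recent_clicks: int, recent_revenue: float, baseline_epc: float,
                  min_clicks: int = 100, drop_tolerance: float = 0.30) -> bool:
    # Ignore the window entirely until it has enough clicks to be meaningful.
    if recent_clicks < min_clicks:
        return False
    recent_epc = recent_revenue / recent_clicks
    # Only trigger when recent EPC has fallen more than the tolerance below baseline.
    return recent_epc < baseline_epc * (1 - drop_tolerance)

# Example: baseline EPC $0.10; the last 120 clicks earned $7.80 (EPC $0.065).
print(should_retest(120, 7.80, 0.10))   # True, a 35% drop over a full window
```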
 
Again, standard deviation with a sample of a few hundred clicks: 0.05 is good, 8.9 is bad. The more deviation, the greater the risk.
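As a rough illustration of that deviation point (my numbers, not a benchmark), the per-click payouts over a few hundred clicks can be summarized like this:

```python
# Mean and spread of per-click earnings over a sample of clicks.
import statistics

def epc_and_deviation(per_click_payouts: list[float]) -> tuple[float, float]:
    # per_click_payouts holds the revenue attributed to each click in the sample,
    # mostly 0.0 with the payout amount on the clicks that converted.
    return statistics.fmean(per_click_payouts), statistics.pstdev(per_click_payouts)

# Example: 300 clicks, 15 conversions at a $2.00 payout.
sample = [2.00] * 15 + [0.0] * 285
print(epc_and_deviation(sample))   # mean EPC ~0.10, standard deviation ~0.44
```

The higher that second number relative to the first, the bumpier the offer and the more clicks you need before trusting its EPC.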

Retest factors? Got some?
Time of day
Climate (if I were selling snow shovels or bathing suits)
Workdays vs. weekends
Is GEO a factor? Are you bidding AdWords by state or by ZIP code?
Are you just referring the leads to someone else's form or call center, or are you selling the leads you gather yourself?

I have databases of geo-economic and segmented-usage data on the US population that I compiled for my own use. The problem is that as an affiliate you really get nothing back about the customers who were marketed to and sold, so you can't construct reliable or usable personas.



Determining a scrub cycle is not going to happen by TOD (time of day); perhaps by VISA-Net GEO divisions?
Shaves can be random, but sometimes they are fixed --yes, I have encountered people that stupid!

You have to develop metrics to even start building an algorithm.

[Attached charts: working-poor.png, may2016Traffic-cities-count.jpg, where-the-money-is.jpg]
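One starting metric for the shave question, assuming you log your own conversion pixel or postback alongside what the network reports: bucket both by day and see whether the gap is roughly constant (a fixed shave) or jumps around (random, or some scrub cycle). The data shape and the 3-point spread below are my assumptions.

```python
# Compare tracked vs. reported conversions per day to estimate a shave rate.
def shave_rates(tracked_by_day: dict[str, int], reported_by_day: dict[str, int]) -> dict[str, float]:
    rates = {}
    for day, tracked in tracked_by_day.items():
        if tracked:
            rates[day] = 1 - reported_by_day.get(day, 0) / tracked
    return rates

def looks_fixed(rates: dict[str, float], spread: float = 0.03) -> bool:
    # A gap that stays within a few points day to day suggests a fixed shave.
    values = list(rates.values())
    return bool(values) and max(values) - min(values) <= spread

# Hypothetical numbers:
tracked  = {"mon": 40, "tue": 38, "wed": 45}
reported = {"mon": 34, "tue": 32, "wed": 38}
rates = shave_rates(tracked, reported)
print(rates, looks_fixed(rates))   # ~15-16% every day -> looks like a fixed shave
```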
We generate dating leads as an affiliate, so I'm pretty sure we will be scrubbed. But yeah... the question is how to check that, as just signing up from a different IP is usually not enough to find out whether they are scrubbing... I guess asking random people (locals/friends) to sign up from time to time should do the trick. Of course the advertiser will need to be informed about these checks, but only after the test. That should make the advertiser less trigger-happy about scrubbing.
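For those friend-signup checks, the reconciliation afterwards could be as simple as keeping the sub-IDs of the seeded signups and seeing which ones never come back as reported conversions. The field names here are just placeholders.

```python
# Seeded test leads that the advertiser never reported back as conversions.
def missing_test_leads(seeded_subids: set[str], reported_subids: set[str]) -> set[str]:
    return seeded_subids - reported_subids

# Hypothetical sub-IDs:
seeded = {"test-001", "test-002", "test-003"}
reported = {"test-001", "test-003"}
print(missing_test_leads(seeded, reported))   # {'test-002'} -> possibly scrubbed
```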
 
 
You are wasting your time trying to second guess a scrub that you have limited or no data on.
I am talking about a Merchant account scrub above.

If the affiliate network is declining your leads, then that is another thing. Scrub? --like 5 leads per hour or something similar? Why? If it's tied to spend, then you have no logic to work with --they don't tell you that. Acquiring that information, if it's possible at all, will cost a LOT.

A server-side script coded to decline incoming leads should be applied variably, assuming the developers are not IT doorknobs :D
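To illustrate why "applied variably" matters, compare a fixed decline pattern with a random one. This is purely illustrative, not anyone's actual script: the fixed version leaves an obvious cycle that seeded test leads will expose, while the random version does not.

```python
import random

def decline_fixed(lead_index: int, every_nth: int = 5) -> bool:
    # Naive pattern: decline every Nth incoming lead -- trivially detectable.
    return lead_index % every_nth == 0

def decline_variable(decline_rate: float = 0.20) -> bool:
    # Each lead is declined independently at a fixed rate, so there is no
    # repeating cycle to spot from lead order alone.
    return random.random() < decline_rate
```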
 