Google Text Ads: Split Testing
Posted on September 15, 2021 (Last Updated: September 15, 2021)
In this blog series on Google Text Ads best practices, we dive into how you can optimize your Google Text Ads to get the best results. In this post we explore the world of split testing with Anton Hoelstad from 1260.
Split Testing (a.k.a. A/B Testing)
We all know that we are supposed to run split tests on our ad copy. Most people probably do this already, but because you might need to wait weeks (if not months) before finding a winner, the task often ends up being neglected.
Furthermore, many of the accounts I review have incorrect settings, which means the data ends up being invalid. Either that, or the setup of the split test makes it almost impossible to determine the winning ad - unless you are willing to wait many months (sometimes even years) before you can draw a conclusion. Obviously, this depends on whether you want to decide the result yourself or outsource the job to the Google algorithm.
First of all, make sure that your ad rotation setting is set to “Do not optimize: Rotate ads indefinitely”. If you instead leave the default, “Optimize: Prefer best performing ads”, Google will evaluate which ad is more likely to perform better (see https://support.google.com/google-ads/answer/112876?hl=en). However, experience tells us that with this option, Google favours the ad with the higher chance of getting clicks - not conversions. Google describes it as:
Ads expected to attract more clicks (and conversions if you’re using a Smart Bidding strategy)
Therefore, unless you are using Smart Bidding (which eliminates the option of picking a winner yourself), Google decides the winning ad based on CTR. Personally, I know that my clients are far more interested in parameters such as conversions, conversion rate, ROAS and CPA than in CTR.
Secondly, I see many accounts where split testing is set up as individual tests per ad group or campaign. This is a fine idea in principle, but the main issue is that you will have to wait for each of these individual tests to gather enough data before you can pick a winner. Instead, I recommend running split tests based on a “generic” headline 2 or 3, or alternatively, a generic description line 2.
Usually, I run my split tests with two or three different headlines, testing ad copies with different USPs (for example “Free Shipping” vs. “24/7 Customer Service”) in all campaigns and ad groups at the same time.
The reasoning behind this is that with one generic ad copy to test, running on the entire account simultaneously, you gather data a lot faster - and once you have enough data to decide a winner, you know which USP your potential customers prefer.
Evaluation of split testing results
When I ask people who manage Google Ads accounts how they decide a winner in a split test, the typical response is “gut feeling”, or that they look at the best-performing ad after X weeks and decide from there. The problem is that the winning ad they end up choosing might not be the proper winner, statistically speaking.
The way to do this properly is to calculate the statistical significance - which in short means confirming that a variation has at least a 95% probability of being the true winner before deciding anything. You don’t need a Master’s degree in statistics to do this - there are plenty of online tools that will do the job for you for free. Just Google “statistical significance calculator”, “A/B split testing calculator” or similar.
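Under the hood, most of these calculators run something like a two-proportion z-test. A minimal sketch of that math in Python (the function name and the example numbers are my own, not taken from any particular tool):

```python
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: confidence that the difference between
    variation A and variation B is real and not just noise.
    conv_* = conversions, n_* = trials (clicks or impressions)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # pooled rate under the assumption that both ads perform the same
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided confidence level via the normal CDF
    return erf(abs(z) / sqrt(2))

# Hypothetical example: ad A converts 40 of 1000 clicks, ad B 60 of 1000
conf = significance(40, 1000, 60, 1000)
print(f"confidence: {conf:.1%}")  # declare a winner only above 95%
```

With these made-up numbers the confidence lands just above the 95% threshold, which is exactly the kind of borderline case a gut-feeling decision would get wrong.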
One matter you will have to take into consideration is which data you input into the tools. Normally they ask for clicks and conversions, and calculate a winner from those. In my opinion, you need both CTR and conversion rate in your calculation to find the absolute best ad variation.
You might have one ad with great CTR and bad conversion rate - and vice versa with another variation. The best option should be the ad with the best combination of CTR and conversion rate. But how do you determine this?
The only thing you have to change is to switch out clicks for impressions. When doing so, you are testing “conversions over impressions”, which gives you more data (because impressions will always be higher than clicks) and faster results - plus it includes both parameters in the same test.
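To see why conversions over impressions captures both parameters at once, note that conversions/impressions = CTR × conversion rate. A short sketch with entirely made-up figures (the ad names and numbers are hypothetical):

```python
# Hypothetical stats for two ad variations testing different USPs
ads = {
    "A: Free Shipping":         {"impressions": 20000, "clicks": 1000, "conversions": 40},
    "B: 24/7 Customer Service": {"impressions": 20000, "clicks": 700,  "conversions": 49},
}

for name, d in ads.items():
    ctr = d["clicks"] / d["impressions"]
    conv_rate = d["conversions"] / d["clicks"]
    # conversions per impression = CTR * conversion rate,
    # so one metric combines both parameters
    conv_per_impr = d["conversions"] / d["impressions"]
    assert abs(conv_per_impr - ctr * conv_rate) < 1e-12
    print(f"{name}: CTR {ctr:.1%}, conv. rate {conv_rate:.1%}, "
          f"conv./impr. {conv_per_impr:.2%}")
```

In this made-up case, ad A wins on CTR while ad B wins on conversion rate - but B has the higher conversions-per-impression figure, so B is the variation the combined test would favour.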
Head of Search & Partner at 1260