Google Ads Campaign Drafts and Experiments
Do you want to test changes to a Google Ads campaign without worrying that they may hurt its performance? Google’s Drafts and Experiments is the right tool for you! It lets you test how one or more changes to a campaign might impact performance without having to change the original campaign.
Currently only available for Search and Display Network campaigns in Google Ads, Drafts and Experiments lets you set the duration of the test, the percentage of the budget split, and the changes you want to test.
You can split traffic going to your experiment in two ways: a cookie-based split shows each user only one campaign variation regardless of how many times they search, while a search-based split randomly assigns a campaign variation for every search that user makes.
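If it helps to picture the difference, here is a toy Python sketch of the two assignment strategies. The hashing scheme, user IDs, and 50% split are purely illustrative; this is not how Google actually implements its traffic split.

```python
import hashlib
import random

def cookie_based_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically assign a user to one variant based on a hash of
    their ID, so the same user always sees the same campaign variation."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "experiment" if bucket < split * 100 else "original"

def search_based_variant(split: float = 0.5) -> str:
    """Randomly assign a variant on every search, independent of the user."""
    return "experiment" if random.random() < split else "original"

# The same user searching three times:
user = "user-123"
print([cookie_based_variant(user) for _ in range(3)])  # same variant every time
print([search_based_variant() for _ in range(3)])      # may mix variants
```

The first function always returns the same variant for a given user; the second can flip between variants from one search to the next.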
Tests can be set up to run for a set period of time or indefinitely. Don’t worry: if you’re seeing significant results before your end date, you can end the experiment at any time.
At the conclusion of the experiment, you can either apply the changes to the original campaign or pause the original and convert the experiment variation into a new campaign. Experiments do have some limits, such as unsupported bid strategies, but for the most part we have found the remaining features very useful. For a complete list of features unavailable in experiments, refer to Google’s help article.
To start your first experiment, click into an existing campaign to set up a draft. In the left menu, click Drafts & Experiments, then Campaign Drafts, click the blue “plus” button, and save the draft under an appropriate name. More details on campaign drafts can be found here.
Your draft will now appear under Drafts & Experiments > Campaign Drafts.
In this draft campaign you can make any changes you want without impacting the original campaign the draft was based on. This queues up changes, but won’t take the new experiment campaign live.
Once all of your changes are set up in the draft campaign, you’ll want to start an experiment. To do this, navigate to Campaign Experiments, select the draft you want to run as an experiment, click APPLY, and select the option “Run an experiment”.
We recommend running your split test until you have reached statistically significant results (indicated by a blue star in the experiment metrics, shown at the top of reporting when viewing an experiment campaign).
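Google surfaces significance for you, but if you want to sanity-check a result on your own exported numbers, a standard two-proportion z-test on conversion rates is one reasonable stand-in. We can’t say this is the exact test Google runs, and the click and conversion counts below are made up.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided z-test for a difference in conversion rate between
    the original campaign (A) and the experiment campaign (B)."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided p-value
    return z, p_value

# Hypothetical numbers: 90 conversions from 4,800 clicks vs. 120 from 5,000.
z, p = two_proportion_z_test(90, 4800, 120, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would suggest a real difference
```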
If you wish to apply the experiment’s changes to the original campaign, hit APPLY and select “Apply changes to the original campaign”. Please note that any changes made to the original campaign during the test will not be reflected in the experiment campaign, so it’s best practice to refrain from making changes during the test period; they make results more difficult to interpret. We also recommend scheduling your experiment to begin in the future to avoid any differences caused by ad approval times.
At Four15 Digital, we most commonly run experiments on landing pages, changing all URLs in the experiment to a new landing page, but we also use experiments to test large-scale ad changes and even bid changes.
Recently we ran an A/A test to gauge the reliability of Google’s testing platform and see how evenly it splits traffic. In an A/A test, the experiment and control campaigns are identical for the duration of the test. We were inspired to try this because we had noticed something strange: many of our experiment campaigns were seeing impression numbers that differed from their original campaigns even though we had set a 50% split, often by a 10-20% delta in impressions or clicks. We hoped the A/A test might reveal nuances within the platform and explain this delta, but to our dismay our identical experiment and original produced unremarkable results: net-net, the delta in impressions between the two campaigns was minuscule.
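If you want to put a number on how far a split has drifted from the configured 50/50, one simple check is to compare impression counts with a chi-square goodness-of-fit test. The impression figures below are illustrative, not the actual numbers from our test.

```python
from scipy.stats import chisquare

# Hypothetical impression counts for an experiment configured as a 50/50 split.
impressions_original = 52_400
impressions_experiment = 47_600
total = impressions_original + impressions_experiment

# Relative delta of the experiment vs. the original campaign.
delta = (impressions_experiment - impressions_original) / impressions_original
print(f"Delta vs. original: {delta:+.1%}")

# Chi-square goodness-of-fit test against the expected even split.
stat, p_value = chisquare(
    f_obs=[impressions_original, impressions_experiment],
    f_exp=[total / 2, total / 2],
)
print(f"chi2 = {stat:.1f}, p = {p_value:.4f}")
```

Keep in mind that at impression-scale sample sizes almost any persistent delta will register as statistically significant, so the practical question is whether the size of the delta is large enough to matter for your account.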
In our A/A test, the experiment campaign was set to a 50% split with the original and used a cookie-based split. According to Google, cookie-based splits allow for a consistent user experience and “can help ensure that other factors don’t impact your results, and may give you more accurate data.” Digging into the performance comparison details, we’re shown the differences versus the original campaign along with the associated statistical significance and confidence intervals. Our conversions are off by -14%, which lies right in the middle of the -20% to -9% range that Google expects. Clicks and cost are even more closely aligned, with differences of around -3%. Given these results, we can assume that Google’s machine learning is automatically preferring one variation over the other during the experiment run. Whether that is the most logical decision is up to your account manager to decide: as much as we expect machine learning to eventually take over, proper decision-making still requires a human touch.