Creativity to Performance: The Basics of KPI-Driven Creative Testing

By Itai Kafri

It’s no secret that here at Bidalgo, we think the importance of ad creative cannot be overstated. It’s the one constant that advertisers control while every other aspect of campaign management gets automated. It’s also what users actually see: the culmination of every campaign, the point where an ad is deemed effective or ineffective.

At the same time, ad creative is the piece of the advertising puzzle considered by many in the industry to be the most impenetrable, driven by amorphous forces such as creativity and aesthetics. To some extent, this is true. A new ad creative is always a bet, but it doesn’t have to be a blind bet.

What we’re saying here isn’t exactly controversial, but it’s worth repeating: if you’re engaged in online marketing, there’s no excuse not to test your assets and learn as much as possible about their potential before you bet on their success.

What Do We Test When We Test Creative?

This deceptively simple question is quite complex, mostly because there’s no one right answer. In an ideal world, where you have unlimited time and an unlimited budget, you test each asset’s fit with the goal you’ve set.

Even then, there’s the task of picking your goal. Do you focus on upper-funnel metrics, which are directly driven by the ad creative, or on lower-funnel metrics, because they are the real objective? Going downstream, different apps have different monetization models. Monetizing through subscriptions? Test your Cost Per Subscription for each asset. Want in-app purchases? Test against that goal.

But few of us have unlimited budgets, and none of us have unlimited time. Compromises are inevitable. Consider your funnel a marathon: you can measure performance at the finish line, yes, but it’s much quicker to measure performance halfway through and extrapolate from there.

So if you’re optimizing for a mid-funnel event, there are several prerequisites: a click on the ad, an install, completion of the onboarding flow. The earlier in the funnel you measure, the less accurate your results will be. On the flip side, you’ll reach the critical mass of results needed to draw a conclusion more quickly, and with a smaller budget.
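
To make that extrapolation concrete, here’s a minimal Python sketch of projecting a downstream cost from a mid-funnel measurement. The function name, the numbers, and the historical conversion rate are all hypothetical:

```python
# Sketch: project a downstream cost (e.g., cost per subscription) from a
# mid-funnel measurement using a historical conversion rate. All values
# below are hypothetical placeholders.

def projected_downstream_cost(spend, midfunnel_events, historical_rate):
    """Estimate cost per downstream event from a mid-funnel test.

    spend: total test spend for the asset
    midfunnel_events: e.g., onboarding completions observed in the test
    historical_rate: share of mid-funnel users who historically reach
        the downstream goal (e.g., onboarding -> subscription)
    """
    expected_downstream_events = midfunnel_events * historical_rate
    return spend / expected_downstream_events

# Example: $500 spend, 200 onboarding completions, and a historical 5%
# onboarding-to-subscription rate -> an estimated $50 per subscription.
print(projected_downstream_cost(500, 200, 0.05))
```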

Similarly, if all you’re doing is testing, consider doing it in a cheaper market than your target one. Yes, the users you get might not be the users you want, but the difference in the cost of experimentation can be significant.

We would be remiss if we didn’t mention that at Bidalgo, our Creative Center can automatically estimate your assets’ chances of success before you spend a single dollar on them. This AI-driven solution, dubbed “Predictive Rank”, analyzes each asset’s creative DNA (elements and color palettes) and compares it to the performance of your existing, similar assets.

Picking The Right Tool for The Job

The design of a testing protocol should depend not only on your budget and the timeframe in which you want results, but also on the nature of the creative assets themselves. Consider differentiating between exploratory and iterative testing.

Exploratory testing can help you understand whether an entirely new concept – be it a creative idea or even a new format – is the right choice for you. This freeform way of testing answers the question, “Is the thing I’m testing good enough for my KPIs?”

Iterative testing is the process of chiseling away at an existing concept to optimize it as much as possible, while picking it apart to see exactly which elements are responsible for meaningful changes to your KPIs.

If you have the resources, both types of tests should run at the same time. Due to the unfortunate inevitability of creative fatigue, you should always be optimizing your existing concepts while searching for the next big thing.

Throughout, don’t forget that you shouldn’t subordinate your branding to tests. Consult your brand marketing experts before testing concepts that seem off-brand, and listen to what they have to say. A short burst of KPI improvements at the expense of your brand taking a hit is rarely worth it.

How We Test

“Equal Opportunity” testing can work well for both exploratory and iterative tests. It can also be used to pit old assets against new ones. Using this method, you give every asset an equal chance to succeed, waiting until each reaches a particular spend or a certain number of installs. The best performers are crowned winners.

While this is a reliable, easy-to-understand approach, it’s not necessarily cost-effective or particularly speedy. A/B testing (also known as “split testing”) is a subset of this kind of test, in which several versions with a single difference between them are pitted against each other.
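
As a rough illustration, here’s what the equal-opportunity protocol might look like in code, assuming a fixed spend threshold and cost per install as the KPI. All names and numbers are illustrative; this is a sketch, not a production implementation:

```python
# Sketch: "equal opportunity" testing. Every asset must reach the same
# spend threshold before being judged; the assets with the lowest cost
# per install are crowned winners. Values are illustrative.

SPEND_THRESHOLD = 300.0  # hypothetical spend each asset must reach

assets = [
    {"name": "video_a", "spend": 310.0, "installs": 62},
    {"name": "video_b", "spend": 305.0, "installs": 41},
    {"name": "video_c", "spend": 140.0, "installs": 35},  # not finished yet
]

def crown_winners(assets, top_n=1):
    # Only judge assets that have had their full, equal chance.
    finished = [a for a in assets if a["spend"] >= SPEND_THRESHOLD]
    ranked = sorted(finished, key=lambda a: a["spend"] / a["installs"])
    return ranked[:top_n]

for winner in crown_winners(assets):
    print(f"{winner['name']}: CPI ${winner['spend'] / winner['installs']:.2f}")
```

Waiting for every asset to cross the same threshold is exactly what makes the comparison fair, and also what makes it slow and costly.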

Machine-Assisted Testing makes it easier to test your assets by using AI to “understand” which ones are best before they hit their targets, and to shift budget toward them. At Bidalgo, our AI automation was built to do exactly this, and can be used to test multiple iterations of a creative concept cost-effectively, without user intervention.
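
Bidalgo’s automation itself isn’t public, but one standard technique for shifting budget toward likely winners mid-test is a multi-armed bandit. Here’s a minimal Thompson-sampling sketch over hypothetical per-asset install counts; treat it as an assumption-laden stand-in, not Bidalgo’s actual algorithm:

```python
import random

# Sketch: Thompson sampling over install rates. One standard way that
# machine-assisted testing can shift budget toward likely winners before
# a test formally ends. The counts below are hypothetical.
results = {
    "concept_a": {"installs": 40, "misses": 960},
    "concept_b": {"installs": 55, "misses": 945},
    "concept_c": {"installs": 12, "misses": 388},
}

def next_budget_share(results, draws=10_000):
    """Estimate each asset's share of the next budget slice."""
    wins = {name: 0 for name in results}
    for _ in range(draws):
        # Sample a plausible install rate from each asset's Beta posterior.
        sampled = {
            name: random.betavariate(r["installs"] + 1, r["misses"] + 1)
            for name, r in results.items()
        }
        wins[max(sampled, key=sampled.get)] += 1
    return {name: count / draws for name, count in wins.items()}

print(next_budget_share(results))
```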

Facebook’s Campaign Budget Optimization mechanism can be used as a testing tool as well. Google’s App Campaigns have built-in testing mechanisms of their own, pairing creative assets dynamically to create ads.

Ideally, you should test creative even before it’s uploaded to an ad, in order to narrow the pool of assets that enter live testing. After all, as you might have already discovered, only a small percentage of your assets will scale successfully for a large activity.

Be careful when comparing drastically different concepts or KPIs rather than similar assets, and especially when testing mature assets against new ones. This is simply because mature assets have much more analytical data behind them, making them likely to outperform a new idea that hasn’t been given adequate time and budget.

New concepts might require several tests to account for the different variables that can influence performance. For example, an asset that includes both an image and text can first be tested to pick the best-performing image, then tested again to find the best text. For videos, you might want to test the thumbnail after deciding which video concept you’re going with.
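
As a toy illustration of that staged approach, with made-up CTRs standing in for real test output:

```python
# Stage 1: test three images (each paired with the same placeholder text)
# and keep the winner. CTRs are made-up stand-ins for real test results.
image_ctrs = {"image_1": 0.021, "image_2": 0.034, "image_3": 0.027}
best_image = max(image_ctrs, key=image_ctrs.get)

# Stage 2: test three text variants, each paired with the winning image.
text_ctrs = {"headline_a": 0.030, "headline_b": 0.038, "headline_c": 0.029}
best_text = max(text_ctrs, key=text_ctrs.get)

# Two stages of 3 variants each (6 tests) instead of all 3 x 3 = 9
# combinations at once: cheaper, at the cost of missing interactions.
print(f"Winning combination: {best_image} + {best_text}")
```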

Obviously, the job of generating iterative assets can itself be a hassle. At Bidalgo, we’ve automated the process as part of our Creative Center and call it Creative Auto-Production. This mechanism enables us to rapidly generate iterations based on the concepts our clients provide. 

General Advice

The best A/B tests have exactly one difference between versions A and B. The more your versions differ, the less you will understand about the reasons behind your results. This applies to the assets themselves, the KPIs, the budget, the time allocated to each test, and everything else.

Not all results are equally significant. One asset achieving a CTR that’s 1% better than another’s in a 100-impression test is far less significant than the same 1% difference across 10,000 impressions. Use techniques such as the chi-squared test to gauge the statistical significance of your results.
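
For example, here’s a minimal sketch using SciPy’s chi-squared test on hypothetical click counts, showing the same CTR gap flip from noise to signal as impressions grow:

```python
from scipy.stats import chi2_contingency

# Hypothetical click counts: the same CTR gap (5% vs. 4%) at two very
# different sample sizes. Each row is an asset: [clicks, non-clicks].
small_test = [[5, 95], [4, 96]]          # 100 impressions per asset
large_test = [[500, 9500], [400, 9600]]  # 10,000 impressions per asset

for label, table in [("100 impressions", small_test),
                     ("10,000 impressions", large_test)]:
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"{label}: p-value = {p:.4f}")

# The small test's p-value is nowhere near significance, while the large
# test's falls well below the conventional 0.05 threshold.
```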

Plan, then act. Don’t just bundle assets together “to see which one is best”. Define the KPI according to what the asset will have to accomplish if it graduates from your testing framework, and build everything around it.
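
In practice, the planning step can be as simple as writing the test’s parameters down before anything launches. A minimal sketch, where every field and value is illustrative:

```python
from dataclasses import dataclass

# Illustrative only: pin the test's parameters down before launch.
@dataclass
class TestPlan:
    kpi: str                  # e.g., "cost_per_subscription"
    graduation_target: float  # what an asset must beat to graduate
    budget_per_asset: float   # equal-opportunity spend cap
    max_days: int             # hard stop, even without a clear winner
    market: str               # where the test runs

plan = TestPlan(
    kpi="cost_per_subscription",
    graduation_target=45.0,   # hypothetical target
    budget_per_asset=300.0,
    max_days=7,
    market="CA",              # hypothetical cheaper proxy market
)
print(plan)
```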

Don’t guess. If your tests include logical leaps, go back to the drawing board. Extrapolating is fine, and approximating is sometimes the only option, but there must be a direct path between what was tested, the result, and the conclusion you’re reaching based on it.

Published on September 15, 2020
Written by Itai Kafri, Director of Product Growth @ Bidalgo.
