
The Ultimate Guide to Split Testing Facebook Ads

James Parsons • Updated on November 20, 2021
Written by ContentPowered.com


Until you know what it’s all about, split testing – also known as A/B testing – sounds like a meaningless marketing buzzword. It’s the kind of thing cheesy consultants charge $500 an hour to talk about, with no actionable advice at the end.

Then you read up on it, and you realize that no, split testing is just about the most effective single experiment you can perform. It’s highly scientific when done properly, and it virtually always results in an increase in conversions, ROI or whatever other metric you’re testing to improve.

How Split Testing Works

Split testing is an incredibly simple idea. It works like this. You start off with an audience of, say, 1,000 people. You’re running an advertisement and you’re getting 50 signups per month with that ad.

To split test, you take your audience and divide it into two segments, each containing 500 people. To one audience, you continue running your current ad. You can expect this will result in 25 signups monthly: half the audience, half the signups.

For the other group of 500, you run a variant of the ad. You might change the copy. You might change the image. You might change the landing page. The point is, you’re changing one – and only one – variable. Never change more than one variable at a time; you won’t know which change was effective.

After one month, your original ad should have brought in 25 signups. Compare that to your new variant. If it brought in 20 signups, you know the change was bad. If it brought in 30, you know the change was good. When you find a good change, you implement it for your main ad and start a new split test, testing a different variable.

That, of course, is the most simplistic way to look at it. Very rarely will a business run a split test with half of their audience for a full month. Tests are typically shorter, on the scale of days or weeks, because you don’t want to sink a lot of time into an ineffective variant.

You also might run multiple tests at once. You take your audience of 1,000, put 500 of them on your main ad, and divide the remaining 500 into five 100-person segments, each with a variant of the ad.
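
To make the comparison concrete, here is a minimal sketch in Python of how you might tally a test like the one above. The segment sizes and signup counts are hypothetical, matching the 500-person control plus five 100-person variants described here; the point is that every segment gets normalized to a conversion rate before you compare.

```python
# A minimal sketch of comparing split-test segments by conversion rate.
# All audience sizes and signup counts below are hypothetical examples.

segments = {
    "control":   {"audience": 500, "signups": 25},
    "variant_a": {"audience": 100, "signups": 4},
    "variant_b": {"audience": 100, "signups": 7},
    "variant_c": {"audience": 100, "signups": 5},
    "variant_d": {"audience": 100, "signups": 6},
    "variant_e": {"audience": 100, "signups": 3},
}

# Convert raw signups into conversion rates so differently sized
# segments can be compared fairly.
rates = {
    name: seg["signups"] / seg["audience"]
    for name, seg in segments.items()
}

baseline = rates["control"]
for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    verdict = "better than control" if rate > baseline else "worse than or equal to control"
    print(f"{name}: {rate:.1%} ({verdict})")
```

Run against these numbers, only variant_b (7%) beats the 5% control; everything else gets cut, and variant_b's change gets rolled into the main ad for the next round of testing.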

Split Testing on Facebook


Facebook is a marketer’s dream with all of the ways you can split test your ads and audience. You’re not limited to dividing your audience up purely by the numbers. Instead, you can split it along demographic or interest targeting lines, giving you more options to test. Here are the main ways you can test Facebook ads.

Testing copy. Facebook ads, sidebar ads in particular, give you very limited space for your copy. You have less space than the average tweet to make your case, and all the infinite variations of the English language to do it with. Copy is where you’re going to spend the majority of your testing time, at least at a small scale. You might be surprised at how much a minor change can affect your conversion rates.

To test copy, create two ads with identical targeting, budgets and images, and change up the copy. Swap out words for synonyms. Swap out CTAs. Try a different tone, more casual or more formal. Even minor punctuation changes can have an effect.

For promoted posts, you have more space to make your changes, which gives you even more options for split testing. On the other hand, you’re limited in time; Facebook factors timeliness into which posts it shows people, even when they’re promoted. Even perfect copy and a huge budget won’t save a months-old post.

Testing images. Here, you again have incredible potential for variation. Anything that can be an image can be tested. You need to make sure you fall within Facebook’s image guidelines, including the 20% text rule, the rules on political issues, the rules on shocking content, the rules on sexual content, and the rules on functionality.

Even within those limits, you have a lot of leeway. If you want your image to be a smiling woman, that’s fine; how many millions of pictures of smiling women exist for you to use? You could spend an incredibly long time testing just that subject, let alone all the other potential subjects.

Testing destination. In this case, you run two ads identical in copy and image, with the same budget and the same targeting, but you change the landing page. Maybe on one of them, you send the user to a compelling post. Maybe on another, you send them to a video landing page. A third might send them off-site to your website landing page, which is a whole other avenue for testing.

You need to make sure, whatever your destination, that it matches the promises made in your ad. You can’t run an ad about shoes and link them to a post selling umbrellas; it just doesn’t work. Anyone interested in umbrellas won’t click your ad, and anyone clicking your ad to see shoes will be disappointed and leave.

Testing budget formats. Facebook allows you to use CPC (cost per click), CPM (cost per thousand impressions), or Optimized CPM. Testing these is a matter of running 2-3 identical ads, all with the same copy, image, landing page, targeting and everything else, just choosing a different budget structure. Some will burn through your budget; others will leave money unspent. There’s no way to tell from the outset which will work best, and you need to re-test each time you make a major change in your campaigns.
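
As a rough illustration, here is a small Python sketch of how you might compare billing structures after a test run. All of the spend, click, impression, and conversion figures below are hypothetical placeholders; swap in whatever your own ad reports show.

```python
# A rough sketch of comparing billing structures after a test run.
# Every number here is a hypothetical example, not real campaign data.

def cost_per_conversion(total_spend, conversions):
    """Spend divided by conversions; lower is better."""
    return total_spend / conversions if conversions else float("inf")

# Hypothetical results from three otherwise-identical ads.
cpc_ad = {"spend": 0.40 * 250, "conversions": 20}              # 250 clicks at a $0.40 CPC
cpm_ad = {"spend": 5.00 * (40_000 / 1000), "conversions": 18}  # 40,000 impressions at a $5 CPM
ocpm_ad = {"spend": 190.00, "conversions": 24}                 # optimized CPM, spend as reported

for name, ad in [("CPC", cpc_ad), ("CPM", cpm_ad), ("oCPM", ocpm_ad)]:
    cpa = cost_per_conversion(ad["spend"], ad["conversions"])
    print(f"{name}: ${ad['spend']:.2f} spent, ${cpa:.2f} per conversion")
```

The winning structure is simply the one that delivers conversions at the lowest cost for this particular campaign, which is why the test has to be repeated whenever the campaign changes significantly.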

Testing by demographic or interest. This is where Facebook’s unique targeting comes into play. Facebook records a ton of information relating to interests and demographics, both from the platform itself and purchased from third parties.

As with budget formats and destinations, you need to leave every aspect of your ads the same, changing only your targeting. This will be a little more complex, however. Unlike the nice, clean 500/500 example at the beginning of this post, interest targeting will give you audience segments of varying sizes. You’ll need to perform an extra step when comparing these tests.

That extra step is performing a few basic calculations at the end to get the precise cost per metric. For example, if you run two ads, one reaching 1,000 people and one reaching 2,000, and you end up with 100 conversions from the first and 150 from the second, you need two calculations. For the audience of 1,000, 1 in 10 people converted (10%). For the audience of 2,000, roughly 1 in 13 converted (7.5%). Even though the 2,000-person audience gave you more raw conversions, it had a lower conversion rate, making the other ad the better one. Once you factor in the cost per view or click, you can see which is more expensive and thus which is better to run.
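
Here is a minimal Python sketch of that extra step: normalize each segment by its size (conversion rate) and by its spend (cost per conversion) before deciding which targeting won. The audience and conversion numbers match the example above; the spend figures are assumed for illustration.

```python
# A minimal sketch of the "extra step": normalize each segment by its
# size and its spend before picking a winner. Audience and conversion
# counts follow the example above; spend figures are assumed.

ads = [
    {"name": "interest_a", "audience": 1_000, "conversions": 100, "spend": 300.0},
    {"name": "interest_b", "audience": 2_000, "conversions": 150, "spend": 520.0},
]

for ad in ads:
    rate = ad["conversions"] / ad["audience"]   # e.g. 100 / 1,000 = 10%
    cpa = ad["spend"] / ad["conversions"]       # cost per conversion
    print(f"{ad['name']}: {rate:.1%} conversion rate, ${cpa:.2f} per conversion")

# The winner is the ad with the higher conversion rate and/or the lower
# cost per conversion, not necessarily the one with the most raw conversions.
```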

Comments

  1. Lani Bellinder

    says:

    How long should one typically wait before split testing another ad group?

    • Boostlikes

      says:

      I would wait at least a week or two I’d say. You’ll know pretty quickly if one group is working for you or not. The more data you have, the better (conversion tracking, Google Analytics, etc).
