A/B split testing is the only true path to marketing genius!

We’re all idiots.  I’m an idiot — and you’re an idiot, too!

“How dare you, Roy?!” you say!

“How dare you insult my…”

Hold on.

I get it.

Nobody likes being called an idiot.  But let me give you some context before you just blow me off.

We’re all idiots when it comes to knowing how people will respond to our marketing.

We have experience.  We have best practices.  We have hunches and we have guesses about what to do for our next campaign.

And yet the moment we have it all figured out, we run an ad and it doesn’t work.

Not, “doesn’t work” in a copy review or boardroom setting.

Rather, “doesn’t work” in actually getting our market to take action.

The same is true with investing.  The moment we think we have it figured out, the market proves us wrong!

The good thing about marketing is that we have the ability to run tests.  Sometimes called “split tests” or “A/B tests.”

A marketing test compares the performance of two or more different versions of marketing creative, to see which one performs better.

We can test safe ideas.  We can also test wild ideas.  We can test this way and that way, and tally the results.

When we have the data there in front of us, we can look at it, and say “Version A converted at 6.3% and Version B converted at 11.4%, and we can predict that Version B will continue to outperform Version A with 97% certainty.”
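That kind of "97% certainty" number comes from a standard two-proportion z-test. As a minimal sketch (the visitor counts here are hypothetical, since the post only gives the conversion rates), here's how a split test calculator arrives at a confidence figure:

```python
from math import sqrt, erf

def split_test_confidence(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: confidence that B truly beats A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical traffic: 500 visitors saw each version
conf = split_test_confidence(32, 500, 57, 500)  # ~6.4% vs ~11.4%
print(f"Confidence B beats A: {conf:.1%}")
```

With roughly those conversion rates and a few hundred visitors per version, the confidence clears the 97% mark — which is all a split test calculator is really telling you.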

We’re idiots, but this kind of data — from testing — can turn us into marketing geniuses.

And once we have this data on a limited, controlled split test…

We can run the most profitable ad, every time!

I’ve been thinking about this today because I realized I didn’t have any kind of ad or banner for my book, The Copywriter’s Guide To Getting Paid, on the Breakthrough Marketing Secrets website.  And yet I get thousands of visitors to the site — many of whom are copywriters — who might be interested in paying shipping to get the book free.

So I decided to finally do something about it.

I’d actually run a test on Facebook a while back — getting click-through data on three images of me holding the book.

I picked the one that performed best there.  My assistant — who has some graphic design background — created a couple different versions of this image, adding calls to action into the image itself.

And I’m using the WordPress AdRotate plugin to rotate the different versions in a couple different areas across my website.

As I gather data, I’ll use a split test calculator to compare click-through rates of the different ad versions, to figure out if there’s a statistically significant probability that my best performing ad will continue to be so.

And once I have a best-performing ad from my first test, I’ll have a “control” against which I can test other ads.

A few thoughts about my test — before I get into why this is so important for you…

I’m going to point out a few things I’m doing wrong in this first test, because 1) if I don’t, someone is going to do it for me, and 2) so you can avoid the same mistakes in your own test, or at least make them knowingly.

My first and biggest mistake is only tracking click data.

It’s mostly because I’m starting my testing with a low-rent testing solution, rather than a sophisticated testing platform.

The ideal test wouldn’t just track clicks, it would track conversions.  Because one ad may lead to higher click-through rates, but lower revenue.  Although I’m a big testing advocate with clients, I’ve been a bit of a hypocrite on my own site, so this first test is an attempt to rectify that.

When I upgrade to a better testing platform, I can and will track all the way through to revenue generated.

Also it’s worth noting that my numbers on Breakthrough Marketing Secrets are good, but not HUGE.  There will always be more clicks on a banner than actual sales generated.  I can get statistical confirmation of my test results faster when I’m comparing clicks instead of sales.  And so that’s another contributing factor in me making this “mistake” on purpose.
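The "clicks confirm faster than sales" point falls out of the standard sample-size approximation for comparing two proportions: the rarer the event, the more traffic you need to detect the same relative lift. A quick sketch, using made-up but plausible rates (4% vs. 5% CTR for clicks; 0.4% vs. 0.5% for sales):

```python
from math import ceil

def sample_size_per_arm(p1, p2, z_alpha=1.645, z_beta=0.842):
    """Approximate visitors needed per ad version to detect a lift
    from p1 to p2 (one-sided 95% significance, 80% power)."""
    effect = (p2 - p1) ** 2
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

clicks = sample_size_per_arm(0.04, 0.05)    # 4% vs 5% click-through rate
sales = sample_size_per_arm(0.004, 0.005)   # 0.4% vs 0.5% purchase rate
print(clicks, sales)
```

Same 25% relative lift in both cases, but the sales test needs roughly ten times the traffic — which is why tracking clicks, while a compromise, gets you statistically usable answers much sooner.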

My second major mistake — and I don’t know if you would have caught it because I’m not showing the ads here — is that I’m testing two ads that look very similar.

They are visibly different when you look at them.  But 95% or so of the ad visual is identical.  They both use the same picture.  About half of the added copy and graphical elements are identical, too.  It’s only one little area of the ads that’s different — but noticeably so.

An ideal test will compare BIG differences, not small ones.  For example, the big direct mail publishers would hire two different copywriters to work with two different designers to test two totally different direct mail packages against each other.  That’s how they ran their most important tests.

When you think of testing, think about it this way…  Little variations in what you’re testing will create little variations in response.  Big variations in what you’re testing will create big variations in response.

If you want to find an ad that performs 10% better, test 10% variations in what you do.  If you want to find an ad that performs 10X better, test something that’s totally different than what you’ve ever done before.

This brings me around to what I really want to share about A/B split testing best practices…

I actually wrote a book on marketing testing way back in 2007-2008 with world-famous copywriter Bob Bly.

It was called The Taguchi Testing Handbook, and Bob said my book was the best he’d ever gotten from a hired writer.

The book itself was NOT a commercial success.  The Taguchi statistical model is way too complex for most marketers to implement.  And it requires a lot of traffic to pull off.

Today, the book is totally outdated, tech-wise.  The spreadsheet it used to do all the statistics is timeless.  But the tool it recommended — Google Website Optimizer — has since been shut down and replaced.  You could still use the book’s method, but you’d have to figure out the tech side on your own.

But in that book was a chapter about testing input design, which is — by far — the most important and profitable skill.

In fact, in the right hands that chapter would be worth 100X, 1000X, or more the $99 cost of the book.

When you’re planning your split test — with the goal of finding an ad version that will multiply your response many times over (and not just add an incremental bump) — there are three important rules.

Rule 1: Test radically different ideas.

I spoke to this before, so I won’t go into much more detail.  But the bigger the differences are between test conditions, the more likely it is you’ll find one that performs at 2X, 5X, even 10X or more compared to everything else.

And I’ll add, sometimes it’s really important to test things that “break the rules” of your industry or marketing style — these are often big flops, but they can also be HUGE winners.

Rule 2: Start with direct marketing best practices.

Even back when Claude Hopkins wrote Scientific Advertising, rules were starting to be established for what effective advertising contained.

A headline.  A pitch.  An offer.  A deadline.  A call to action.

And so on.

Until you’ve established a profitable control ad, test ads that follow the rules of direct marketing best practices first.

Rule 3: Test high-probability areas.

The big rule is that if you can’t see the difference in the first couple seconds looking at the ad, it won’t likely make a big difference in response.  This is why the vast majority of direct mail tests are cover tests for magalogs, or envelope and headline tests for other direct mail pieces.  It’s what gets noticed first, so it has the biggest impact.

In all advertising, it’s worth considering testing the headline, the lead, any big and obvious graphics, and similarly instantly-identifiable elements.

The one exception here is offer testing.  Because the offer is often as influential in generating response as copy is, an offer test is usually smart.  Test price, guarantee presentation (and length), payments, shipping cost, and other critical elements of the offer.

It’s press time and I’m still finishing, so no matter what else you take away, remember this…

If you don’t test, you’re being a marketing idiot.  You’re running things and making decisions based on things that don’t really matter in the end.

If you do test, you’re being a marketing genius.  You’re figuring out which ad will be most profitable or perform best, and running that.

It really comes down to that.

And today, I’m ushering in a new era where I’m less of a marketing idiot when it comes to what I’m doing on Breakthrough Marketing Secrets!

Yours for bigger breakthroughs,

Roy Furr