Why You Don't Want to Fail This Direct-Mail Test
The following excerpt is from Robert W. Bly’s book The Direct Mail Revolution: How to Create Profitable Direct Mail Campaigns in a Digital World.
In direct mail (DM), testing is the process of putting a letter or package in the mail, counting the replies, and coming to a conclusion based on the results.
Testing is a huge advantage that direct marketers have over branding and general advertisers: We first do a small test to determine whether our direct-mail package works. If it does, we can gradually expand the campaign. On the other hand, if the test bombs, we know early on that the package doesn’t work. The test costs only a few hundred or a few thousand dollars, and it saves us many more thousands of dollars by not continuing to mail a DM package that consistently loses money.
Testing is one of the central ideas of direct-mail marketing: Test small, then roll out in larger quantities once the tests show you which is the winning package.
The 3 most important factors to test
What are the three most significant factors you can test -- the ones that can have the greatest influence on response?
Number one is the mailing list. There could be a half-dozen mailing lists suitable for your offer -- or even more. You can’t assume you know which one is best based on your personal biases. The only way to know for certain which list will pull best with your package is through a test mailing.
The second most important factor to test is the price. This applies mainly to mail-order selling. For instance, let’s say you’ve published a thousand-page market-research report on broadband internet. How much will people pay for it? $195? $495? $1,200? You simply don’t know until you test. And frequently you’ll be amazed at how many people place orders at prices you think are sky-high.
The third most important factor to test is the offer. Should you try for mail orders or leads? Should you offer a premium? If you do, will you get better response offering a gift item such as a digital watch or free information such as a booklet or special report? You won’t know which works better unless you test.
A/B split tests
When two mailings or mailing factors are tested against each other, it’s called an A/B split test, with one version labeled as test cell A and the second as test cell B. For instance, you might test letter A against letter B to see which pulls more orders. Or you might take letter A and mail it to two different lists, to see which list produces the better response. Or you might mail a control as test cell A against a new test package as test cell B.
A control is the current best-performing DM package. For instance, a marketer may be mailing thousands of the same direct-mail package month after month because it’s profitable. But how do they know another package, with different graphics, size, colors, and copy, won’t generate even better results? They can’t, unless they test it. So they periodically commission a new direct-mail package or put one together in-house and then mail it against their control in an A/B split test.
There are two approaches to split testing, and each has its place. The first is to test a completely new DM package against the control and see which one wins; if the challenger beats the current control, it becomes the new control. However, because the test package differs from the control in many ways at once (different graphics, copy, price offer, guarantee, package format, etc.), you won’t know which of these elements made the difference.
The second approach is to test multiple versions of the control where just a single element is different; for example, the envelope teaser, copy theme, size of envelope, price, premium, or first-class vs. third-class postage. By testing just one variable at a time, you can determine how each factor influences response.
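Once the replies from an A/B split are in, you need to judge whether cell B genuinely beat cell A or the gap is just noise. The book simply counts replies; one common way to formalize the comparison is a two-proportion z-test. A minimal sketch, with illustrative numbers and function names of our own (not from the book):

```python
import math

def response_rate(replies, mailed):
    """Response rate for one test cell."""
    return replies / mailed

def z_score(replies_a, mailed_a, replies_b, mailed_b):
    """Two-proportion z-test: how confidently does cell B beat cell A?"""
    p_a = replies_a / mailed_a
    p_b = replies_b / mailed_b
    # Pooled response rate under the "no real difference" assumption
    pooled = (replies_a + replies_b) / (mailed_a + mailed_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / mailed_a + 1 / mailed_b))
    return (p_b - p_a) / se

# Illustrative counts: control pulls 20 orders, test pulls 34, 2,000 pieces each
z = z_score(20, 2000, 34, 2000)
print(f"z = {z:.2f}")  # values above ~1.96 suggest ~95% confidence B truly wins
```

With these made-up counts, z lands just below the 1.96 threshold, so a careful tester would keep mailing before crowning a new control rather than declare a winner on a near-miss.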
Number of DM pieces per test cell
Statistical analysis shows that you can get a valid test result with as few as 2,000 names per cell. Validity is determined not by the number of pieces mailed but by the number of replies received per cell.
Experience and statistical analysis indicate that 14 responses per test cell give you a fairly reliable reading of each cell’s performance. If your average response rate is 1 percent, then 1,400 names per test cell is an adequate size. However, because response is unpredictable, mailing 2,000 names per cell gives you some wiggle room.
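The cell-size arithmetic above reduces to one division: the replies you need, divided by the response rate you expect. A quick sketch (the 14-reply threshold and 1 percent rate come from the text; the helper name is ours):

```python
import math

def minimum_cell_size(responses_needed, expected_response_rate):
    """Names needed per test cell to expect a given number of replies."""
    return math.ceil(responses_needed / expected_response_rate)

# 14 replies at a 1 percent response rate -> 1,400 names per cell
print(minimum_cell_size(14, 0.01))  # 1400
# Padding the cell to 2,000 gives wiggle room if response comes in below forecast
```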
When testing, you must be able to track response -- to identify each reply as coming from a specific mailing or from a recipient whose name was on a specific mailing list. There are several ways to do this. The simplest is to put a key code on the reply element. This code can be a series of numbers and letters in fine print tucked away in the corner of the reply card, or it can be worked into the address. Your list broker can handle this for you.
If you’re affixing or imprinting the recipient’s address on your reply card or order form, the mailing-list owner can add a key code to the order form. The same coding can be done for telephone responses. For catalogs, when a customer calls to order, the customer service representative typically asks for a code printed in a blue or yellow box on the back cover near the recipient’s address.
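In practice, key coding amounts to a lookup table: each code maps to one list-and-cell combination, and replies are tallied by code. A hypothetical sketch (the list names and code scheme below are invented for illustration):

```python
# Hypothetical key codes: one per (mailing list, test cell) combination
KEY_CODES = {
    "A1": ("Buyers list 1", "cell A"),
    "A2": ("Buyers list 2", "cell A"),
    "B1": ("Buyers list 1", "cell B"),
}

def tally_replies(reply_codes):
    """Count replies per key code so each list/cell can be scored separately."""
    counts = {code: 0 for code in KEY_CODES}
    for code in reply_codes:
        if code in counts:  # skip illegible or missing codes
            counts[code] += 1
    return counts

print(tally_replies(["A1", "B1", "A1", "ZZ"]))
# {'A1': 2, 'A2': 0, 'B1': 1}
```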
After a successful test, a winning direct-mail package is “rolled out,” meaning it’s mailed to more names on the profitable lists. But can the results of a small test mailing remain statistically valid regardless of how many additional names we mail to? No. The rule of thumb is that the rollout quantity should be no more than 10 times the number of names you tested. Therefore, if you got a 5 percent response in a test of 5,000 names, you can roll out to as many as 50,000 additional names on the list and be confident that you’ll get a similar response.
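The 10-times rule of thumb is a single multiplication; a sketch (the function name and default are ours, stating the book's rule):

```python
def max_rollout(test_quantity, multiplier=10):
    """Rule of thumb: roll out to no more than 10x the tested quantity."""
    return test_quantity * multiplier

# A 5,000-name test supports a rollout of up to 50,000 names
print(max_rollout(5000))  # 50000
```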