Ray worked with business-to-business and consumer clients throughout the world, including the USA, Canada, Mexico, Asia, the South Pacific, Europe, the Middle East, Central & South America, and Africa.

This website is a compilation of Ray's 10 years on the Web.


Power Direct Marketing: The Book

The Four Keys to Successful Analysis

It is now obvious that digging inside your program to measure its effectiveness is no easy task. Detailed analysis may be complex—sometimes even difficult. To make it easier, I’ve reduced the key parts to just 4 points:

  1. Did the program work? And what was it that did work?
  2. What did not work?
  3. Why? Why what worked did work, and why what did not work did not work?
  4. What are you going to do with all this good information next time ’round?

Let’s talk about these 4 points one-by-one.

Did the program work? And what was it that did work?

Usually, this isn’t too difficult to determine. You have the response and can tell rather quickly if the program was a success or not.

You can tell which mailing lists worked, which publications gained the most positive response, which television advertisement generated the most orders, which package got the most donations, and which coupon had the highest redemption rates. This part is usually fairly obvious.

Because you set up your program to measure—to count the numbers of responses, the orders over the 800 number, the sales with offer "A" versus offer "B"—you can tell what worked. This is basic measurement. It allows you to analyze your total direct marketing effort.

Learn from your direct response successes what worked.

What did not work?

Sometimes this is easy; many times it is not. Easy because in an "A" versus "B" situation, one clearly dominated the other. There was a runaway winner. A direct mail package that garnered 2/3 or 3/4 of all the response. And it paid.

A coupon in a print ad that not only gained a healthy response against the total circulation of that particular publication, but whose leads made the sales staff happy because they were closing sales from this program.

Meanwhile, the same ad in another magazine or newspaper wasn't doing anything. A flop. A dismal failure. In this example it is black and white: easy to pick the winner and the loser.

What happens when you test 2 different packages or offers and your response looks like this:

                     Number    Percent    Leads       Percent    Sales
                     Mailed    Response   Generated   Closed     Made

Creative package
or offer (test A)    10,000    2%         200         5%         10

Creative package
or offer (test B)    10,000    1%         100         10%        10

Let’s further assume you’re happy with 10 sales from 10,000 contacts. Which of these two would you roll out for more and which one would you toss? Again, it depends. The cost per sale is lower for "B" because you had fewer contacts to make to get the 10 sales. But it could be you’re still pleased with the 5% sales ratio of "A."
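The arithmetic behind this comparison can be sketched in a few lines of Python. Note the cost figures are hypothetical assumptions for illustration only; the example above gives no costs. The point they illustrate is the one in the text: both tests yield 10 sales, but "B" gets there with half the leads, so the sales staff works fewer contacts per sale.

```python
# Sketch of the A/B test above. MAIL_COST and FOLLOWUP_COST are
# hypothetical figures for illustration; the original gives no costs.

MAIL_COST = 0.50      # assumed cost per piece mailed
FOLLOWUP_COST = 25.0  # assumed sales-staff cost per lead worked

def analyze(name, mailed, response_rate, close_rate):
    leads = round(mailed * response_rate)   # responses generated
    sales = round(leads * close_rate)       # leads converted to sales
    cost = mailed * MAIL_COST + leads * FOLLOWUP_COST
    return {"test": name, "leads": leads, "sales": sales,
            "cost_per_sale": cost / sales}

a = analyze("A", 10_000, 0.02, 0.05)  # 200 leads, 10 sales
b = analyze("B", 10_000, 0.01, 0.10)  # 100 leads, 10 sales
print(a)
print(b)
```

Under these assumed costs, "B" comes out cheaper per sale only because following up leads costs something; change the cost assumptions and the judgment call the text describes reappears.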

Tough decision, isn’t it? Sometimes it is not black and white. Judgment, timing, the marketplace, the competition, your sales staff size and capabilities, fulfillment services, and many other factors all enter into the decision.

In any case, no matter how you see this example, you do need to determine what does not work for you.

Here’s another example. This time it’s a 4-way mailing list test. These are the results:

            Number              Percent    Leads       Sales
            Mailed    Nixies*   Nixies     Generated   Made

List #1     5,000     635       12.70%     100         10

List #2     5,000     210        4.20%     110         10

List #3     5,000     825       16.50%     100         15

List #4     5,000     145        2.90%      80         10

*Nixie is a direct marketing term for undeliverable mail—mail that cannot be delivered because of an incomplete address or similar.

Which list or lists are the winners in this example? How do you analyze the results from this list test?

With by far the most nixies, List #3 is still the clear winner. Why? Because with 15 sales, to 10 for each of the other lists, it has to be the choice. What about the efficiency of List #4? With fewer leads, 10 sales were still made. And List #2? With more leads, only 10 sales were made. Yet these two lists had a very low and acceptable nixie rate.

Further, if you’re satisfied with 10 sales from each 5,000 mailing, then all the lists are good for your purpose—you achieved your objective. In this example the undeliverable mail just isn’t a factor, because you reached your goal of 10 sales per 5,000 contacts.

Of course, it is easy to draw the correct conclusion that the more mail delivered the better opportunity you have to get good leads from a good list—and consequently more sales. In this case, however, the list with the most nixie mail worked best.
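The list-test numbers above can be worked the same way. This sketch simply recomputes the table and ranks the lists by sales made, the measure the text settles on; it also shows sales per piece actually delivered, which makes List #3's performance despite its nixie rate even more striking.

```python
# Sketch of the 4-way list test above, using the figures from the table.
# Despite the highest nixie (undeliverable) rate, List #3 wins on sales.

lists = {
    "List #1": {"mailed": 5_000, "nixies": 635, "leads": 100, "sales": 10},
    "List #2": {"mailed": 5_000, "nixies": 210, "leads": 110, "sales": 10},
    "List #3": {"mailed": 5_000, "nixies": 825, "leads": 100, "sales": 15},
    "List #4": {"mailed": 5_000, "nixies": 145, "leads": 80,  "sales": 10},
}

for name, r in lists.items():
    delivered = r["mailed"] - r["nixies"]          # pieces that arrived
    nixie_rate = r["nixies"] / r["mailed"]         # undeliverable share
    per_delivered = r["sales"] / delivered         # sales per delivered piece
    print(f"{name}: {nixie_rate:.2%} nixies, "
          f"{per_delivered:.3%} sales per piece delivered")

winner = max(lists, key=lambda n: lists[n]["sales"])
```

Ranking by sales made picks List #3, just as the text concludes; a different objective (say, lowest nixie rate) would pick a different winner.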

Why? Why what worked did work, and why what did not work did not work?

Good friend Freeman Gosden, Jr., has done an entire direct marketing book on "why." Because "why" is important to continued success in direct response. You need to find out why things happen. So you can repeat those good things and eliminate the others.

It is not difficult to find out the good part of "why"—why what worked did work. How do you do this? Simply ask your customers. In a mail, telephone, or in-person interview you ask them why. They’ll tell you. Because they are your customers, they’ll tell you.

Your customers are your best source of what is going on in your marketplace. If they’ve just bought from you, it is likely they’ve shopped and know what’s being offered. They may know more than you about what is really happening out there (hopefully not, but it is possible!).

So, ask them why they responded the way they did. What turned them on to you and your offer? To your product or service? Why you and not the competition? Why you at this time?

It’s amazing what you can learn by asking. You might offer a small gift as a thank-you for their time in helping you—or you may not. But in any case "ask and ye shall receive."

The other side of the coin is a tad harder. To get to the bottom of why what did not work did not work might take a little more effort.

The reason is you have no customers. Or, very few. If you did an offer or package or list test and you’re dealing with the losing group, you have somewhere between a few and nil customers. Tough to work with small numbers.

You can still do exactly the same with the "did not work" group as you did with the winners: ask them "why not." It will take longer and will be more difficult. But it will work.

You will have more turn-offs than helpful hints—but when you’re finished you will know "why what did not work did not work."

What are you going to do with all this good information next time ’round?

Or, as Hewlett-Packard asked in its advertising: "what if..."

Computers are marvelous, aren’t they? Without them the entire business world would not be the same. Or even close to it. Direct response has been blessed by their use. Because of them we are more cost-efficient, effective, selective.

With computers we have the capability to gather all sorts of data—in fact "tons" of it—and then we don’t use it!

It doesn’t make sense to devote your valuable time to gathering information, asking questions, and analyzing the results on issues that are only remotely likely to occur, or that aren’t BIG, important issues in the first place.

You should plan to deal only with facts and the opportunities they offer that can make a difference. Every little detail will not make a difference.

It is very important as you analyze and measure your direct response efforts that you do gather all the information you need to make the best decisions. To build your database for future use. Yes, even exploitation.

As you’re doing this information gathering, make sure it is true and useful knowledge. Not just facts and figures, numbers and words, but real knowledge about you, your products and services as your customers and prospects see them.

Make sure what you save after analysis is data that will be useful to you tomorrow, the next day, and the next, and on down the line.

How do you know what to save? I don’t know. Only with experience do you gain knowledge and decide what is truly important to you. In the beginning it is guesswork and common sense. Working with a high-tech company introducing a new product, we saved data on nearly 100 creative/media combinations. In less than five months we knew which 20 were important, and dumped the rest.

Years ago I worked with a transportation company, dividing their large list into useful segments. In this case the selections were made too fine . . . much of the data was never used. It was certainly not cost effective: an overkill of a good idea, done because the computer was available, the process was inexpensive, and the information was easy to pump in.

Don’t dump it in unless you truly feel you’ll soon dump it out and use it to make your next campaign even better. Think about the elements that are really important, the BIG pieces of your program where a difference can be important.

Then save and store and massage and manipulate that information. Turn it into "knowledge"—something beneficial to your company that can be used to increase your share and profits in the marketplace.


Contents by ROCKINGHAM*JUTKINS*marketing, all rights reserved.
Design by William F. Blinn Web Design, all rights reserved.