Ray worked with B-to-B and consumer clients throughout the world ... including the USA, Canada, Mexico, Asia, the South Pacific, Europe, the Middle East, Central & South America, and Africa.

This website is a compilation of Ray's 10 years on the Web.


Power Direct Marketing: The Book


The Deadly Seven Pitfalls of Segmentation

Here are seven things to avoid or be careful of when doing market segmentation:

  • Not randomly selecting samples
  • Not creating back-end measurement files
  • Too long before roll out
  • No remarketing of test audiences
  • Universe change between test and campaign
  • No accounting for variations in creative and offers
  • No accounting for geographic differences

Let’s look at each of the seven pitfalls more closely.

Pitfall 1: Not randomly selecting samples to study

This is particularly dangerous. Sometimes, in the excitement of major programs and campaigns, with the human desire for winners, we forget that winning is not the purpose of segmentation. The purpose is to find winning groups, not make them.

A test sample must be representative of the total list. You want the results and your projections to be reliable. You want the promotion to work next time, too, when you roll out your program to the next level. And the next.
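
One way to keep a sample honest, sketched below in Python: draw the test names at random from the full file rather than taking the top of it. The list contents, sample size, and seed are illustrative assumptions.

    import random

    def draw_test_sample(full_list, sample_size, seed=42):
        # A random draw keeps the sample representative; taking the top
        # of the file, one state, or one source would bias the test.
        rng = random.Random(seed)  # fixed seed so the draw can be repeated
        return rng.sample(full_list, sample_size)

    # Illustrative usage: 5,000 test names from a 100,000-name file
    full_list = [f"name_{i}" for i in range(100_000)]
    test_names = draw_test_sample(full_list, 5_000)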

Pitfall 2: Not creating back-end measurement files

Although measurement and analysis happen after you are in the marketplace, you must plan them at the beginning. If you do not, you may forget such critical elements as match codes, which are vital to really learning what happened with your offer and your lists.
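
As a sketch of what planning for match codes might look like, here is a made-up code format that ties every response back to its list, segment, and test cell. The fields and format are assumptions for illustration, not a standard.

    # Made-up match code: list, segment, test cell, and sequence number,
    # stamped on each piece so every response can be traced back.
    def match_code(list_id: str, segment: str, cell: int, seq: int) -> str:
        return f"{list_id}-{segment}-{cell:02d}-{seq:06d}"

    print(match_code("HOTLINE", "NE", 3, 41_507))  # HOTLINE-NE-03-041507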

You must lay out on paper the math of your program—what you expect, your objectives, and when you’ll arrive at break even—and decide how to measure the levels of success BEFORE going into the field.
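
Here is a minimal sketch of that math, laid out in Python. Every figure is an illustrative assumption, not a benchmark.

    # Break-even math, planned BEFORE going into the field.
    # All figures are assumptions for the sketch, not benchmarks.
    pieces_mailed    = 50_000
    cost_per_piece   = 0.75    # printing + postage + list rental
    profit_per_order = 40.00   # margin per response, after fulfillment

    total_cost        = pieces_mailed * cost_per_piece     # $37,500
    break_even_orders = total_cost / profit_per_order      # 937.5 orders
    break_even_rate   = break_even_orders / pieces_mailed  # 1.875% response

    print(f"Break even at {break_even_orders:.0f} orders "
          f"({break_even_rate:.2%} response)")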

You may have several goals and thus several measures. That is fine, as long as you have them!

Plan to apply your test results to the total campaign.

Plan your measurement so you can measure your plan.

Pitfall 3: Too long before roll out

The most carefully planned and well-executed programs will not pay off if there is too long a time period between the test of the campaign and the campaign itself. Segments of the marketplace are fragile. Because they are smaller niches of the whole, they are more likely to change—quickly. Sure, sometimes that is better for your program—and many times it is not!

The file is different when you go to it a second time—when you roll out your program. It has aged, it has moved, and it has heard from your competition. It is different. Take that sure fact into consideration.

When you plan your test, plan the full campaign at the same time, so you can take advantage of every opportunity. Promptly.

Pitfall 4: No remarketing of test audiences

Some feel the names you have tested should not be included in the roll out. I disagree. Almost without exception there are more on the list that did not respond than did. If the list as a whole worked, a repeat of your message to those test names is still worth a shot. A second shot . . . and maybe more.

On your second, third, and later mailings to this audience, you may see a lower response rate, a smaller percentage of response, because you creamed the list the first time. And, at the same time, you will gain more total response than if you had not chased this list again. And again. Over and over. In most instances you will gain more by repeat contacts with the same audience than by not doing so.

When you do chase them again, however, you must take that into account when you measure results. The reason is obvious: this group heard from you before. On this second contact, it is very likely you will reinforce whatever awareness and image you previously sparked and whatever interest you generated.

Take the test group into account: measure it separately.
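
A minimal sketch of keeping the books separately. The counts are illustrative assumptions; the point is that the two rates never get blended.

    # Measure the remarketed test group apart from the fresh names.
    # Counts are illustrative assumptions.
    results = {
        "test_group":  {"mailed": 5_000,  "responses": 90},   # heard from us before
        "fresh_names": {"mailed": 45_000, "responses": 900},
    }

    for segment, r in results.items():
        print(f"{segment}: {r['responses'] / r['mailed']:.2%} response")

    # Blended, the two would read 1.98% and hide that the remarketed
    # group pulled 1.80% against 2.00% for the fresh names.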

Pitfall 5: Universe change between test and campaign

Why this happens is a mystery, but it does. You carefully select and test a segmented audience. It works. And then you change the selection criteria when you roll out. And wonder why the results fail to relate!

Don’t do that. This does not mean you should not test some other, different collection of factors. Fine. Test. But don’t expect the same results. The key word here is TEST.

When you find a profitable segment, go at it. At the same time, you can continue to test to improve results. Just don’t mix it all together and expect the same results.
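
One simple guardrail, sketched below: record the exact selection criteria used in the test and check the rollout against them before you mail. The criteria fields and values are illustrative assumptions.

    # Guardrail: the roll out must use the same selection the test used.
    # Fields and values are illustrative assumptions.
    test_selection    = {"recency_months": 12, "min_orders": 2, "region": "all"}
    rollout_selection = {"recency_months": 12, "min_orders": 2, "region": "all"}

    changed = {k for k in test_selection if test_selection[k] != rollout_selection[k]}
    if changed:
        raise ValueError(f"Universe changed between test and roll out: {changed}")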

Pitfall 6: No accounting for variations in creative and offers

As with all testing, you should do it: test! Direct response marketing is founded on, and succeeds with, test, test, test. As with the audience testing recommended in Pitfall 5, expect that you will have different results. Expect it!

If you change the creative, the look, style, format, design, color, copy, anything—everything—you will get different results. Which is fine. Just make certain you know that is going to happen. And are prepared.

When you change the offer from FREE to a One-Cent-Sale, Free Trial, Limited Time Offer, or any variation, you will get different results. Expect them. And measure accordingly.

Pitfall 7: No accounting for geographic differences

Birmingham is not Spokane. Maine is not Arizona. Albuquerque or Brisbane, Rome or Singapore or Toronto, east or west, north or south—geography does affect your results. These places are different. Just as people are different, so is geography. And it does make a difference.

Weather has a major impact on how people respond. When it is sunny on the beaches in Florida and raining on the folks in Boston, it is reasonable to expect that any messages sent to your highly segmented audience in those two markets that day will be received differently. Expect that to happen, because it will.

Geography affects lifestyles. And even though you do have a niche market, that niche is not 100% homogeneous every day of the year. You are dealing with people, individual people, and they are all different.



Contents by ROCKINGHAM*JUTKINS*marketing, all rights reserved.
Design by William F. Blinn Web Design, all rights reserved.