Testing Tuesday: A weekly series on how to get the most out of direct-response fundraising tests
What did you learn from that test you just ran?
It's not always obvious. Let me give you an example.
Let's say you decide to test two different photos in your direct mail pack. You wonder whether a "sad" photo or a "happy" photo will do better. You have sufficient quantity. You're disciplined, so everything else in the pack is identical, except for this one photo:
Control: PHOTO A (Sad Old White Guy)
Test: PHOTO B (Happy Old White Guy)
So far so good. You wait a couple of months to get the results (or a couple of days if your test is online).
Let's say that the test panel, PHOTO B (Happy Old White Guy), outperforms the control at a statistically significant level.
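If you want to check significance yourself rather than eyeball the response counts, a standard two-proportion z-test does the job. Here's a minimal sketch in Python; the mailing quantities and response counts below are hypothetical, not from any real campaign:

```python
from math import erf, sqrt

def two_proportion_p_value(resp_a, n_a, resp_b, n_b):
    """Two-sided z-test for a difference between two response rates.

    resp_a, resp_b: number of responses in each panel
    n_a, n_b: number of pieces mailed in each panel
    """
    p_a, p_b = resp_a / n_a, resp_b / n_b
    # Pool the two panels to estimate the standard error under the
    # null hypothesis that both photos pull the same response rate.
    p_pool = (resp_a + resp_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: 10,000 pieces per panel
p = two_proportion_p_value(520, 10_000, 610, 10_000)  # Photo A vs. Photo B
print(f"p-value: {p:.4f}")  # below 0.05, so the lift is statistically significant
```

A p-value under 0.05 is the conventional (if somewhat arbitrary) threshold; with small mailing quantities or tiny differences in response rate, the test will correctly tell you that you haven't learned anything yet.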
Hooray! You've learned something from your donors.
But what have you learned?
You have not learned that happy photos are better than sad photos.
You've only learned that this particular happy photo in this exact context performed better than that particular sad photo.
If you are going to be sending this exact direct mail piece repeatedly, you've learned something very valuable: With the new photo, this mailing will do better from now on.
But maybe you've uncovered something beyond this one mailing. In this hypothetical case, the less-expected photo did better. Usually, "sad" photos motivate more giving than "happy" ones. If your test shows otherwise, that should get your attention! Maybe there's something you need to know.
In a case like this, you should test the same notion again, with other photos and in other contexts. It's important to know whether that first test was a fluke (that happens!) or it was pointing to a fundamental truth.
Note that photos are especially slippery test subjects. You can never test the efficacy of photos in general -- only whatever specific photo you have at hand. The specifics always trump the general.
With any test, you must know what you've really learned, and avoid drawing over-broad conclusions. That takes some careful thinking.