If you were to run a side-by-side test of two segmented email marketing lists that differed only in sending time, and one list came back with a 2% higher open rate, would you then change the sending time for the whole of that demographic? After all, 2% sounds like a significant figure. If only all tests gave such positive results.
I don’t want to be negative, but the advantage has not been proved. An increase in open rate is not enough on its own to be considered a success. The result is comparative: a specific change improved one metric, but that is not conclusive evidence of an overall advantage.
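Whether a 2% lift means anything depends on how many recipients sat behind it. As a rough check, a two-proportion z-test shows how easily a gap like that arises by chance; the list sizes and open counts below are hypothetical, purely to illustrate the calculation.

```python
import math

def two_proportion_z_test(opens_a, size_a, opens_b, size_b):
    """Return (z, two-sided p-value) for the difference in open rates."""
    p_a, p_b = opens_a / size_a, opens_b / size_b
    pooled = (opens_a + opens_b) / (size_a + size_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / size_a + 1 / size_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function, doubled for a two-sided test
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 1,000 recipients per segment, 24% vs 22% open rate
z, p = two_proportion_z_test(240, 1000, 220, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With only 1,000 recipients per segment, a two-point gap sits comfortably inside the range of random noise (p well above 0.05); the same gap across tens of thousands of recipients would be a different story.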
There is only one purpose for your email marketing campaign, one that should be the yardstick in every test: completions. Everything else is peripheral. I’m not suggesting a higher open rate is of no consequence, only that if it doesn’t increase completions, it is of no value.
Let’s take a look at another metric, the click-through rate. Logically, if you make a change that increases it, there will be a greater chance of completion. Say you segmented your email marketing list, wondering whether changing the wording of the button to ‘Bargain Time’ would grab attention. You decided to experiment with a little animation, making stars burst as the mouse passed over the button. No one could miss it, and, to prove your genius, the click-through rate increased.
Before you rush off to your boss to show just how effective you are, check the completion rate for both segmented lists. If it has dropped for the animated version, there is further work to do.
There are two obvious causes of the drop in completions to consider: either the new button discouraged people who would normally have clicked through, or it attracted people who are not interested in the item. The spangly button has the positive of generating more interest, but you will have to work out which items those new clickers might actually buy.
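The check described above amounts to putting click-through rate and completions per recipient side by side for the two variants. The figures below are invented to illustrate the pattern, not taken from any real campaign.

```python
# Hypothetical results for the two segmented lists
variants = {
    "plain button":    {"recipients": 5000, "clicks": 400, "completions": 60},
    "animated button": {"recipients": 5000, "clicks": 650, "completions": 45},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["recipients"]            # click-through rate
    cpr = v["completions"] / v["recipients"]       # completions per recipient
    print(f"{name}: CTR {ctr:.1%}, completions per recipient {cpr:.2%}")
```

Here the animation wins on clicks but loses on completions, which is exactly the case where the click-through improvement is worth nothing on its own.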
No metric stands alone and all must be judged by the completion rate.