It’s the age-old question in every email marketing conversation: “when is the best time to send an email newsletter?” The answer is that there isn’t one best time. Yes, you read that right. If you want to increase email engagement rates, it’s not as simple as picking a certain day or time.
Similar to Farmers Insurance, “we know a thing or two because we’ve seen a thing or two” when it comes to email marketing. Every year, we study over 100 billion emails to curate an annual report about email marketing trends and engagement. And do you know what we’ve found? The best time to send an email newsletter varies by industry, audience, and engagement goals. There is no one-size-fits-all time to send an email newsletter.
The core of email marketing engagement is a newsletter tailored to your product, brand, and target audience. To accomplish this, it’s essential to continually test, analyze, and optimize your email campaigns. What does this look like in practice? Let’s dig in.
Test your emails
The foundation of perfecting email engagement is testing what does and doesn’t work for your audience across every aspect of your emails. This includes the time of day you send, subject lines, copy, graphics, and other key elements.
Note that this may be different for each audience segment, product, and type of email (i.e., feature announcement vs. welcome email) you send. It may sound overwhelming to test so many things with multiple segments, but thankfully there’s a systematic way to approach email tests that will simplify uncovering trends: A/B testing.
1. Segment your email subscriber list
To segment your subscriber list, divide it into smaller lists according to key characteristics, such as demographics, business type, purchase behavior, or location. Segments let you see what has the most impact on each audience and make your email marketing more targeted in the future.
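If you want a feel for what segmentation means under the hood, here’s a minimal Python sketch that groups a hypothetical subscriber list by one attribute. The records and field names are made up for illustration; in practice, your ESP’s segmentation tool handles this for you.

```python
# A minimal sketch of segmenting a subscriber list by a key attribute.
# The subscriber records and the "location" field are hypothetical.
from collections import defaultdict

subscribers = [
    {"email": "ana@example.com", "location": "US", "plan": "trial"},
    {"email": "ben@example.com", "location": "UK", "plan": "paid"},
    {"email": "chloe@example.com", "location": "US", "plan": "paid"},
]

def segment_by(subscriber_list, attribute):
    """Group subscribers into smaller lists keyed by the chosen attribute."""
    segments = defaultdict(list)
    for subscriber in subscriber_list:
        segments[subscriber[attribute]].append(subscriber)
    return segments

for location, segment in segment_by(subscribers, "location").items():
    print(location, [s["email"] for s in segment])
```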
Ideally, your email marketing platform should have a segmentation tool that will make it easy to do. Here’s how it works on Campaign Monitor’s platform.
2. Form a hypothesis
Once you have segmented lists, it’s time to form a hypothesis, or “educated guess,” just like you would in a scientific test. To develop your hypothesis, first pick a segment of your list to focus on, then pick a single element to test that’s key for that group.
For example, you may make an educated guess about the outcome of changing the time you send welcome emails. Similar to setting a goal, your hypothesis should be S.M.A.R.T. (Specific, Measurable, Achievable, Relevant, and Time-bound). In this case, your hypothesis could be “sending welcome emails within 10 minutes of a user joining will increase email open rates by 6% over the next three months with the new user segment.”
3. Split each segment into an “A” and “B” test group
Now that you’ve formed your hypothesis, split the subscriber segment in two: an “A” group for your control group and a “B” group for your test group.
Split the segment equally at random to ensure the results aren’t skewed one way or the other. The easiest way to achieve random group selection is to use an email service provider (ESP) that has built-in A/B testing.
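For the curious, here’s a minimal sketch of a random 50/50 split in Python. It’s purely illustrative; an ESP with built-in A/B testing does this automatically.

```python
# A minimal sketch of randomly splitting one segment into equal "A" (control)
# and "B" (test) groups. Email addresses below are hypothetical.
import random

def split_ab(segment, seed=None):
    """Shuffle the segment and split it in half at random."""
    shuffled = segment[:]                 # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # group A, group B

group_a, group_b = split_ab(
    ["ana@example.com", "ben@example.com", "chloe@example.com", "dev@example.com"],
    seed=42,
)
```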
Assess whether each group is large enough to provide statistically significant results. If the groups are too small, the test is prone to reflecting random chance rather than a real difference, whereas a larger group increases the accuracy of your results by reducing the influence of randomness.
The sample size you need for statistical significance depends on a few factors and a fair amount of math. If you’re not a statistician or just don’t like doing math (because who does?), you can easily find the right size by using an A/B test calculator. A good starting size is usually at least 1,000 subscribers, but that can be lower or higher depending on the test and the subscriber list.
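If you’d rather see the math than trust a calculator blindly, here’s a rough Python sketch of the standard two-proportion sample-size formula. The baseline 20% open rate, the hoped-for 6-point lift, 95% confidence, and 80% power are all hypothetical numbers; plug in your own.

```python
# A rough sample-size sketch for an A/B test on open rates.
# All numbers below are hypothetical assumptions, not benchmarks.
from statistics import NormalDist
from math import ceil, sqrt

def sample_size_per_group(p_control, p_test, alpha=0.05, power=0.80):
    """Approximate subscribers needed per group for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    p_bar = (p_control + p_test) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_test * (1 - p_test))) ** 2
    return ceil(numerator / (p_test - p_control) ** 2)

print(sample_size_per_group(0.20, 0.26))  # about 771 subscribers per group
```

With these assumptions the formula lands at roughly 770 subscribers per group, which is why “at least 1,000” is a comfortable rule of thumb.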
4. Create “A” and “B” test assets
To test a specific aspect of your email, create two variations of the same email with just that single element changed to reflect your hypothesis.
For example, create two identical welcome emails, but send one at the time you typically send your welcome emails and one at the time reflected in your hypothesis. Following the hypothesis example above: if you typically send your welcome emails two days after the user joins, send your control email at this time. Your test group email could be sent 10 minutes after the new user joins to test the effectiveness against your baseline results from your control group.
The only thing different between the two emails should be the time you send them. Testing more than one element at once is called multivariate testing; for example, a multivariate test would change both the time the email is sent and the subject line. Only use multivariate testing when you’re deliberately testing combinations of elements, and it’s best to implement it only after testing each individual element, as the sketch below illustrates.
For example, after you test and find the most effective time to send your email, you can then combine it with winning subject lines to measure the combined impact. If you attempt to test all aspects of an email at the same time, it can be difficult to determine which element is contributing positively or negatively to the overall outcome.
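To make the combinatorics concrete, here’s a tiny Python sketch of how quickly variants multiply in a multivariate test. The send times and subject lines are placeholders.

```python
# A minimal sketch: in a multivariate test, every combination of elements
# becomes its own variant to send and measure. Values are hypothetical.
from itertools import product

send_times = ["10 minutes after signup", "2 days after signup"]
subject_lines = ["Welcome aboard!", "Your account is ready"]

variants = [
    {"send_time": time, "subject": subject}
    for time, subject in product(send_times, subject_lines)
]
print(len(variants), "variants to test")  # 2 x 2 = 4 combinations
```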
5. Run your test on a platform that can measure results
Now it’s finally time to hit play on your test. Make sure you send your email from an ESP with a strong analytics dashboard so you can easily measure and assess the results. Remember to isolate all variables except the one you’re testing: if you’re testing send times, don’t also write different subject lines or change other elements. Use the same subject line in both emails, and change only the time sent.
Analyze the data
Once you’ve run your test, it’s time to assess the outcomes and determine if your hypothesis was correct or not. When testing the hypothesis above, for example, look at open rates for each email segment to measure the impact of send time. Whichever group had the highest open rate would be the “winner.”
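If you want to sanity-check the winner yourself rather than eyeballing the open rates, a two-proportion z-test is the standard tool. Here’s a minimal Python sketch with hypothetical counts; a p-value below 0.05 suggests the lift is unlikely to be random noise.

```python
# A minimal sketch of checking whether the open-rate difference between the
# control ("A") and test ("B") groups is statistically significant.
# The send and open counts below are hypothetical.
from statistics import NormalDist
from math import sqrt

def open_rate_z_test(opens_a, sent_a, opens_b, sent_b):
    """Return the z-statistic and two-sided p-value for the open-rate difference."""
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = open_rate_z_test(opens_a=210, sent_a=1000, opens_b=265, sent_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```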
If you’re using an ESP that has built-in A/B testing, the platform should do most of the hard work for you. For example, in Campaign Monitor’s A/B test analytics dashboard, you can view graphs of your results and conversion values all at the same time.
In addition to analyzing the results as they pertain to the individual test, assess them in light of your overall email newsletter performance. This will help you gain further insight into the potential impact the change could have on other email segments. For example, if a personalized subject line increased open rates with new customers, consider running the same test with other list segments.
Optimize based on the results
The data you gather and analyze is only as valuable as the action you take on it. The key to long-term success is implementing the changes indicated by the test results and continuously iterating on them. Your audience’s needs change, your brand will likely evolve, and your email marketing campaigns need to adapt accordingly. To adapt effectively, make A/B testing an ongoing practice.
Note that how you choose to optimize your email will have varying impacts. Therefore, it’s essential to set a clear primary goal before making changes to your email marketing. Our research has found that the best day and time to send an email depends not only on your industry but also on your goals.
For example, Mondays, on average, have the highest open rates, but Tuesdays have the highest click-through rates (CTR). So, if your goal is higher open rates, Monday may be a better day. But if a higher CTR is your goal, then Tuesday is a better bet. All of this varies by industry and audience, so it’s important to test it with your specific email list.
It’s also important to tailor your changes to each audience segment because, again, email optimization is largely dependent on the audience. Sweeping, universal changes to your email marketing are typically less effective. They must be personalized and tailored to each audience segment’s needs to drive the greatest impact. In fact, according to research by Accenture, 91% of consumers are more likely to shop with a brand that offers a personalized experience.
Uncover the data that will tell you the right time to send an email newsletter for your audience
Campaign Monitor is the email marketing platform built for real marketing professionals. Our email marketing analytics uncover the trends that a winning email marketing strategy is built on.
Discover the trends specific to your audience in your own Campaign Monitor dashboard. You won’t see any gimmicky email functions, cutesy monkeys, or best guesses here. Instead, you’ll get real-time data that gives you a clear direction on what your customers want and need. You won’t just find the best time to send them emails; you’ll discover what makes your audience convert.