From 8 Credits

List Migration

Seamlessly migrating your email lists to a new platform without losing data or deliverability

Moving to a new email platform should feel like an upgrade, not a risk. But without careful management, a list migration can lead to lost data, broken segments, deliverability issues, and a disrupted sending history.

List migration handles that process professionally. It ensures your contacts, tags, segments and historical data make the move intact — so you arrive on your new platform with everything you need to carry on without missing a beat.

What Is Our List Migration Service

List migration is the process of moving your email contacts, segments and historical data from one email platform to another. It involves exporting data correctly from the existing platform, preparing it for import, mapping fields and segments accurately, and importing everything into the new platform without losing data, breaking segments or damaging deliverability.
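To illustrate the field-mapping step, here is a minimal sketch in Python. The column names and CSV layout are assumptions for illustration only; every platform has its own export format, and real migrations work through each platform's export files or API.

```python
import csv
import io

# Hypothetical mapping from the old platform's export columns
# to the new platform's import columns (illustrative names only).
FIELD_MAP = {
    "Email Address": "email",
    "First Name": "first_name",
    "Tags": "tags",
    "Signup Date": "subscribed_at",
}

def remap_export(export_csv: str) -> list:
    """Read an exported CSV and rename columns per FIELD_MAP,
    dropping any columns the new platform does not accept."""
    reader = csv.DictReader(io.StringIO(export_csv))
    return [
        {new: row[old] for old, new in FIELD_MAP.items() if old in row}
        for row in reader
    ]

export = "Email Address,First Name,Tags\na@example.com,Ann,vip;newsletter\n"
print(remap_export(export))
# [{'email': 'a@example.com', 'first_name': 'Ann', 'tags': 'vip;newsletter'}]
```

Keeping the mapping in one explicit table like this makes it easy to review before import and to spot fields that have no home on the new platform.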

Why Choose Our List Migration Service

You need this when you're moving to a new email platform and can't afford to lose contacts, tags, segments or sending history along the way, when a previous migration has left your data fragmented, or when you're worried that switching providers could damage the deliverability you've built. A managed migration takes the risk out of changing platforms and replaces it with a controlled, documented process.

What's Included In Our List Migration Service

This service includes exporting your contacts, tags, segments and historical data correctly from your existing platform, preparing the data for import, mapping fields and segments accurately, and importing everything into your new platform. It covers checks at each stage so that no data is lost, no segments are broken and your deliverability is protected, leaving you ready to send from the new platform without disruption.

A bad list migration is one of the fastest ways to destroy email deliverability you've spent years building. The contacts matter. The history matters. The segment structure matters. Migration done properly is invisible. Done poorly, it's expensive to fix.

Harry Morrow, Director - We Do Your Marketing

Why We’re Different

Most marketing companies focus on channels and tactics.
We focus on reaction.

Before selecting platforms, formats, or media spend, we define how your audience thinks, feels, and decides. We use behavioural psychology to understand what will capture attention, build trust, and motivate action — then choose the channels that best support that outcome.

Every channel we use has a clear purpose, a defined role, and a measurable objective. Nothing is done “because it’s popular” or “because it’s expected”.

The result is marketing that feels natural to engage with, works across multiple channels, and is designed to deliver meaningful, long-term results.

Want to see how this approach works in practice?

Helpful resources, expert guidance, and tools to support your Marketing decisions.

Frequently Asked Questions About List Migration
We have compiled a list of questions that are often asked about list migration and how it can help your business. If you can't see the answer to a question you have, please contact us today!

A/B testing involves sending two or more variations of an email element to different portions of your list to see which performs better. The winner — determined by opens, clicks or conversions — is then sent to the remaining subscribers or used to inform future campaigns.
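The split described above can be pictured with a short sketch. The 20% test fraction and the even A/B split are assumptions for illustration; platforms let you configure both.

```python
import random

def ab_split(contacts, test_fraction=0.2, seed=42):
    """Randomly assign a test portion of the list to variants A and B;
    the remainder is held back to receive the winning version."""
    rng = random.Random(seed)       # fixed seed so the split is repeatable
    shuffled = contacts[:]
    rng.shuffle(shuffled)
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    return {
        "A": shuffled[:half],
        "B": shuffled[half:test_size],
        "holdout": shuffled[test_size:],
    }

groups = ab_split([f"user{i}@example.com" for i in range(100)])
print(len(groups["A"]), len(groups["B"]), len(groups["holdout"]))  # 10 10 80
```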

Subject lines, preview text, send time, from name, email length, images versus text, call to action wording, button colour, content order, personalisation and layout are all valid test variables. The best tests focus on one variable at a time.

Most A/B tests require a minimum of 1,000 to 2,000 contacts per variant to achieve statistical significance. Below this threshold, results may be directionally useful but shouldn’t be treated as definitive.
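As a rough illustration of where such thresholds come from, here is a standard sample-size approximation for detecting a lift between two proportions. The baseline rate, lift, confidence and power values are assumptions chosen for the example, not fixed rules.

```python
import math

def sample_size_per_variant(p_base, lift, z_alpha=1.96, z_power=0.84):
    """Approximate contacts needed per variant to detect a given
    absolute lift over a baseline rate (95% confidence, 80% power)."""
    p_var = p_base + lift
    p_avg = (p_base + p_var) / 2
    numerator = (z_alpha * math.sqrt(2 * p_avg * (1 - p_avg))
                 + z_power * math.sqrt(p_base * (1 - p_base)
                                       + p_var * (1 - p_var))) ** 2
    return math.ceil(numerator / lift ** 2)

# e.g. detecting a 5-point lift over a 20% open rate
print(sample_size_per_variant(0.20, 0.05))
```

For these example numbers the formula lands in the low thousands per variant, which is consistent with the rule of thumb above; smaller expected lifts push the requirement sharply higher.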

Most email A/B tests can be concluded within 24 to 48 hours, as most engagement happens in the first few hours after sending. Tests focused on conversion may need to run longer to capture delayed purchase behaviour.

A result is statistically significant when there's sufficient evidence that the observed difference in performance is real rather than due to chance. A 95% significance threshold is commonly used, meaning there is less than a 5% probability of seeing a difference that large by chance alone if the variants actually performed the same.
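One common way to run the check described above is a two-proportion z-test; this is a minimal sketch with made-up example numbers, and the exact method varies by tool.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference between variant A's
    and variant B's conversion (or open/click) rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)     # combined rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Example: 220 opens of 1,000 sends vs 180 opens of 1,000 sends.
z = two_proportion_z(220, 1000, 180, 1000)
print(abs(z) > 1.96)  # True means significant at the 95% level
```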

Testing multiple variables simultaneously requires a multivariate test, which needs a larger list to generate meaningful results. For most businesses, testing one variable at a time is more practical and produces clearer, more actionable conclusions.

Start with the elements that have the most impact on your primary metric. Subject lines have the greatest effect on open rate; calls to action have the greatest effect on click rate. A systematic testing plan prioritises highest-impact tests first.

Not automatically. Consider whether the winning result is statistically significant, whether it applies to the full list or just the test segment, and whether there are any contextual factors that might have influenced the result. Treat test results as evidence, not absolute truth.

A structured test log — recording what was tested, the result and what will be applied going forward — is essential. Without documentation, testing becomes a series of isolated experiments rather than a cumulative body of knowledge.

Yes. The law of diminishing returns applies, but even strong performers have room for improvement. The highest-performing email programmes are those that test continuously rather than stopping when results are satisfactory.