Most cold email programmes operate on a set of assumptions about what works — assumptions formed by reading the same blog posts, attending the same webinars, and following the same playbooks as every competitor in the space. Without systematic testing, there is no mechanism to separate what actually works for your specific product, market, and prospect profile from what someone else found to work in a different context.
A/B testing in cold email is not a search for universal truths about what works; it is a search for what works in your specific situation. The only way to discover that is to run controlled tests with enough volume to generate statistically meaningful results.
What to Test in Cold Email
- Subject lines. The highest-leverage test in cold email. A 10 percentage point difference in open rate between two subject line formulations will have downstream effects on every metric in your campaign. Test subject line formulas (the structural pattern: question, statement, personalised reference), not just specific subject lines, so a winning result generalises beyond the single test.
- Opening lines. The first sentence after the subject line determines whether the rest of the email gets read. Test different opening line approaches: problem-lead versus observation-lead versus reference-lead.
- Value proposition formulation. Test different ways of expressing the same outcome: "reduces time-to-hire" versus "cuts your recruiting cycle in half" versus "[named customer] went from 45 to 28 days." Same claim, different formulations, measurably different performance.
- CTA structure. Test a direct calendar link versus a yes/no question versus a resource offer as the primary CTA. The results often run counter to the sender's intuition about what should work; a sketch of randomised variant assignment follows this list.
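Whichever variable is under test, prospects should be split between variants randomly and consistently, not by list order or sender whim. Below is a minimal sketch of deterministic assignment, assuming a prospect list keyed by email address; the variant labels and salt string are hypothetical placeholders.

```python
# Minimal sketch: hash-based A/B assignment for a cold email test.
# Hashing each address gives a stable, roughly 50/50 split, so
# re-running the send script never reassigns a prospect mid-test.
import hashlib

VARIANTS = ["subject_a", "subject_b"]  # hypothetical variant labels

def assign_variant(email: str, salt: str = "q3-subject-test") -> str:
    """Deterministically map an email address to a test variant."""
    digest = hashlib.sha256(f"{salt}:{email.lower()}".encode()).digest()
    return VARIANTS[digest[0] % len(VARIANTS)]

print(assign_variant("jane@example.com"))  # same variant on every run
```

Changing the salt starts a fresh split for the next test; keeping it fixed guarantees no prospect silently switches variants if the list is re-imported mid-campaign.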
The Statistical Validity Problem
A common testing mistake: declaring a winner after 50 sends per variant. At that volume, the apparent differences between variants are usually noise: at a 5 percent reply rate, 50 sends yields only two or three replies per variant, so a single extra reply flips the apparent winner. Cold email A/B tests need at least 200-300 sends per variant to produce results that are more signal than chance — which means testing requires patience and sufficient list volume to be meaningful.
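Rather than eyeballing two rates, the comparison can be made explicit with a two-proportion z-test. Below is a minimal sketch in plain Python, assuming only open and send counts per variant; the counts in the usage lines are illustrative, not real campaign data.

```python
# Minimal sketch: two-proportion z-test for comparing variant open rates.
from math import erf, sqrt

def two_proportion_z_test(opens_a: int, sends_a: int,
                          opens_b: int, sends_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for open rate A vs. B."""
    rate_a = opens_a / sends_a
    rate_b = opens_b / sends_b
    # Pooled rate under the null hypothesis that both variants are equal.
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# A 10-point open rate gap (40% vs 30%) at 50 sends per variant: p ≈ 0.29,
# i.e. still indistinguishable from chance.
print(two_proportion_z_test(20, 50, 15, 50))
# The same gap at 300 sends per variant: p ≈ 0.01, a real difference.
print(two_proportion_z_test(120, 300, 90, 300))
```

The same framework yields a sample-size target: a standard power calculation for detecting a 10 percentage point gap around a 35 percent baseline open rate, at 5 percent significance and 80 percent power, lands near 350 sends per variant, in the same ballpark as the 200-300 guideline above.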