
Email A/B Test Framework

A framework for designing, running, and analyzing A/B tests on outbound and marketing emails.

Use this framework every time you run an email A/B test. It ensures you test one variable at a time, reach statistical significance, and actually implement the findings.

Test Planning

Fill out this section before launching any test.

| Field | Details |
| --- | --- |
| Test name | |
| Date range | |
| Variable being tested | Subject line / Email length / CTA / Send time / Personalization level / Sender name |
| Hypothesis | "We believe [change] will [expected outcome] because [reasoning]" |
| Primary metric | Open rate / Reply rate / Click rate / Meeting booked rate |
| Secondary metric | |
| Sample size per variant | Minimum 200 per variant for email tests |
| Test duration | Minimum 5 business days |
| Owner | |

Variable Isolation Rules

Test only one variable at a time. If you change two things at once, you cannot attribute the result to either.

| Test Type | What to Change | What to Keep Constant |
| --- | --- | --- |
| Subject line | Subject line text only | Body, send time, sender, audience |
| Email length | Body word count | Subject, CTA, send time, sender, audience |
| CTA | Call-to-action text and placement | Subject, body content, send time, sender |
| Send time | Day of week or time of day | Subject, body, CTA, sender, audience |
| Personalization | Level of personalization (generic vs. custom first line) | Subject, CTA, send time, sender |
| Sender name | From name (first name vs. full name vs. company) | Subject, body, CTA, send time, audience |

Test Design Template

Variant A (Control)

| Element | Content |
| --- | --- |
| Subject line | |
| Preview text | |
| Body (paste full text) | |
| CTA | |
| Send time | |
| Sender name | |

Variant B (Test)

| Element | Content |
| --- | --- |
| Subject line | |
| Preview text | |
| Body (paste full text) | |
| CTA | |
| Send time | |
| Sender name | |

Sample Size and Significance

Do not call a test based on gut feeling. Use these guidelines.

| Sample Size (per Variant) | Minimum Detectable Difference | Confidence Level |
| --- | --- | --- |
| 100 | ~15% relative change | Low (directional only) |
| 200 | ~10% relative change | Moderate (80% confidence) |
| 500 | ~5% relative change | High (95% confidence) |
| 1,000+ | ~3% relative change | Very high (99% confidence) |

For outbound emails, aim for at least 200 contacts per variant. For marketing emails to larger lists, aim for 500+.
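The guidelines above can be sanity-checked with a standard two-proportion power calculation. A minimal sketch using only the Python standard library; the baseline reply rate of 5% and the target of detecting a 50% relative lift at 95% confidence and 80% power are illustrative assumptions, not numbers from this framework:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, relative_lift, alpha=0.05, power=0.80):
    """Approximate n per variant for a two-sided two-proportion z-test
    (normal approximation)."""
    p_test = p_base * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 at 80% power
    p_bar = (p_base + p_test) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_test * (1 - p_test))) ** 2
    return math.ceil(numerator / (p_base - p_test) ** 2)

# Detecting a 5% -> 7.5% reply-rate lift takes roughly 1,500 contacts per variant
print(sample_size_per_variant(p_base=0.05, relative_lift=0.50))
```

Note that low base-rate metrics such as reply rate need far larger samples than high base-rate metrics such as opens, so treat the table's thresholds as rough floors rather than guarantees.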

Results Tracking

| Metric | Variant A (Control) | Variant B (Test) | Difference | Significant? |
| --- | --- | --- | --- | --- |
| Emails sent | | | | |
| Open rate | | | | Y / N |
| Reply rate | | | | Y / N |
| Positive reply rate | | | | Y / N |
| Click rate | | | | Y / N |
| Meeting booked rate | | | | Y / N |
| Unsubscribe rate | | | | Y / N |
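One common way to fill the "Significant?" column is a two-proportion z-test on each rate metric. A minimal sketch, with illustrative reply counts (not data from this framework):

```python
import math
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative: A replied 30/400 (7.5%), B replied 52/400 (13.0%)
z, p = two_proportion_z_test(30, 400, 52, 400)
print(f"z = {z:.2f}, p = {p:.3f}")  # mark "Y" at 95% confidence if p < 0.05
```

Run the test once per metric and mark Y only where the p-value clears your chosen confidence level; checking many metrics at once inflates the odds of a false positive, so decide on the primary metric first.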

Decision Framework

| Result | Action |
| --- | --- |
| Variant B wins with statistical significance | Implement Variant B as the new default. Document the learning. |
| Variant B wins, but without statistical significance | Extend the test if possible. If not, keep Variant A and retest with a larger sample. |
| No meaningful difference | Keep Variant A (simpler is better when results are tied). Test a different variable. |
| Variant B loses | Keep Variant A. Document why the hypothesis was wrong. |
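The decision rules above can be encoded so every test resolves the same way regardless of who runs it. A minimal sketch; the function name and flags are assumptions for illustration, not part of the framework:

```python
def decide(b_wins: bool, significant: bool, meaningful_difference: bool = True) -> str:
    """Map a test outcome to the action in the decision table above."""
    if not meaningful_difference:
        return "Keep Variant A; test a different variable."
    if b_wins and significant:
        return "Implement Variant B as the new default; document the learning."
    if b_wins:
        return "Extend the test, or keep Variant A and retest with a larger sample."
    return "Keep Variant A; document why the hypothesis was wrong."

print(decide(b_wins=True, significant=True))
```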

Test Log

Keep a running log of all tests to build institutional knowledge.

| Test # | Date | Variable | Hypothesis | Winner | Lift | Sample Size | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | | |
| 2 | | | | | | | |
| 3 | | | | | | | |
| 4 | | | | | | | |
| 5 | | | | | | | |

Testing Priority Order

If you are starting from scratch, test in this order. Each test builds on the previous winner.

  1. Subject lines (highest impact on open rates)
  2. Email length (short vs. long body)
  3. CTA type (question vs. statement, soft vs. direct)
  4. Send time (morning vs. afternoon, day of week)
  5. Personalization level (generic vs. one personalized line vs. fully custom)
  6. Sender name (first name only vs. full name vs. title + name)
