
A/B Testing

Testing two versions of an email, subject line, or landing page to see which performs better.


What is A/B Testing?

A/B testing (also called split testing) compares two versions of something to determine which performs better.

In sales and marketing, you create two variations—Version A (the control) and Version B (the challenger)—and show them to similar audiences. The version that achieves higher metrics wins.

Common A/B Tests in Sales:

  • Subject lines (e.g., "Quick question" vs. "Meeting request")
  • Email opening hooks (personalized vs. generic)
  • Call-to-action phrasing (interest-based vs. direct ask)
  • Send times (Tuesday morning vs. Thursday afternoon)
  • Email length (short vs. long)
  • Sender names (sales rep vs. founder)

The key is testing one variable at a time. If you change both the subject line and the opening, you won't know which change caused the performance difference.


Why A/B Testing Matters

Most sales teams optimize based on gut feelings. Top teams optimize based on data.

A/B testing replaces opinions with evidence. Instead of debating whether question-based subject lines outperform statement-based ones, you run a test and know.

Compound Impact of Incremental Wins:

A 10% improvement in open rate, a 10% improvement in reply rate, and a 10% improvement in meeting booking rate compound to a 33% total improvement in pipeline generated (1.10 × 1.10 × 1.10 ≈ 1.33).

Small test wins compound into massive results.
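
The arithmetic behind that claim: stage lifts multiply through the funnel rather than adding. A quick sketch, using the illustrative 10% figures above:

```python
# Each stage's lift multiplies through the funnel.
lifts = [0.10, 0.10, 0.10]  # open rate, reply rate, meeting-booked rate

total = 1.0
for lift in lifts:
    total *= 1 + lift

print(f"Total pipeline lift: {total - 1:.1%}")  # -> 33.1%
```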

Learning What Actually Resonates:

Your prospects tell you what they prefer through their actions. A/B testing is the mechanism for listening.

You might discover:

  • Your audience responds better to casual tone vs. formal
  • Personalized references to company news double response rates
  • Question-based subject lines outperform curiosity gaps
  • Shorter emails (under 75 words) get more replies
These insights inform all future outreach, not just the tested campaign.


A/B Testing Methodology

1. Formulate a Hypothesis

Start with a specific, testable prediction.

Bad Hypothesis: "A better subject line will improve performance."
Good Hypothesis: "Question-based subject lines will achieve 20% higher open rates than statement-based subject lines for VP-level prospects."

2. Choose What to Test

High-Impact Testing Priorities:

| Priority | Element | Potential Impact | Test Difficulty |
|----------|---------|------------------|-----------------|
| 1 | Subject line | High (25-40% open rate variance) | Low |
| 2 | Opening hook | High (first 3 seconds matter most) | Medium |
| 3 | Call-to-action | High (response rate impact) | Low |
| 4 | Email length | Medium | Low |
| 5 | Personalization depth | Medium | High |
| 6 | Send time | Low (5-15% variance) | Medium |
| 7 | Sender name | Low | Medium |

Start with high-impact, low-difficulty tests.

3. Create Variations

Version A (Control): Your current best-performing version

Version B (Challenger): One specific change

For example, testing subject lines:

  • A: "Quick question about sales strategy"
  • B: "Saw your Series B announcement"
Only the subject line differs. Everything else stays identical.

4. Split Your Audience

For valid results, send to similar audiences.

Valid Split:

  • First 100 prospects get Version A
  • Next 100 prospects get Version B
  • Both groups from same ICP segment

Invalid Split:

  • Version A to VP of Sales prospects
  • Version B to Marketing Director prospects
  • Different roles = different baseline response rates
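
The sequential split above is fine when list order is arbitrary; randomizing the split removes any ordering bias entirely. A minimal sketch, assuming a hypothetical prospect list drawn from one ICP segment:

```python
import random

def split_ab(prospects, seed=42):
    """Randomly split one ICP segment into two equal test groups."""
    shuffled = prospects[:]  # copy so the original list order is untouched
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical list: 200 prospects, all from the same ICP segment
prospects = [f"prospect_{i}@example.com" for i in range(200)]
group_a, group_b = split_ab(prospects)
print(len(group_a), len(group_b))  # 100 100
```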

5. Determine Sample Size

Too small = inconclusive. Too large = wasted time.

Minimum Sample Sizes:

| Email Type | Minimum Sends Per Version | Reasonable Test Size |
|------------|---------------------------|----------------------|
| Cold email | 100 | 200-500 |
| Follow-up | 100 | 200-500 |
| Nurture email | 500 | 1,000-2,000 |
| Marketing email | 1,000 | 2,000-5,000 |

For cold email, 200-500 sends per version is usually enough to reach statistical significance on lifts worth acting on.
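
To size a test for your own baseline rates, a standard power analysis is more precise than a rule of thumb. A sketch using statsmodels, assuming a 25% control open rate and a challenger lifting it to 35% (both rates are assumptions for illustration):

```python
# Power analysis for a two-proportion test (normal approximation).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.25    # Version A open rate (assumed)
challenger = 0.35  # open rate Version B would need to show

effect = proportion_effectsize(challenger, baseline)
n = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n:.0f} sends per version")  # ~328
```

This lands around 330 sends per version, inside the 200-500 band above. Smaller lifts, or lower-baseline metrics like reply rate, need larger samples.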

6. Measure Results

Primary Metrics:

  • Open rate (did subject line work?)
  • Reply rate (did email resonate?)
  • Positive reply rate (did they want to continue?)
  • Meeting booked rate (ultimate conversion)

Run tests for a minimum of two weeks; some responses arrive days later.

7. Declare a Winner

Version B wins if it:

  • Achieves statistically significant improvement (typically +15% or more)
  • Maintains performance across audience segments
  • Outperforms on your primary metric
If results are inconclusive (<10% difference), run a larger test or test something else.
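
To check the statistical-significance condition, a two-proportion z-test is a standard choice. A minimal sketch with hypothetical reply counts, using statsmodels:

```python
# Two-proportion z-test on hypothetical reply counts.
from statsmodels.stats.proportion import proportions_ztest

replies = [12, 24]  # Version A, Version B (hypothetical)
sends = [250, 250]  # sends per version

stat, p_value = proportions_ztest(count=replies, nobs=sends)
print(f"p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Significant at 95% confidence - roll out the winner")
else:
    print("Inconclusive - run a larger test")
```

With these counts (4.8% vs. 9.6% reply rate), the p-value comes out around 0.04, clearing the 95% confidence bar.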


What to A/B Test in Sales

Subject Lines

Test Types:

| Approach | Example | When to Use |
|----------|---------|-------------|
| Question | "Quick question about [topic]?" | Low-pressure first contact |
| Curiosity | "Noticed something about [company]" | After researching prospect |
| Direct value | "[Specific result] in [timeframe]" | Clear value proposition |
| Personalized | "[Name], saw your [specific news]" | Strong signal found |
| Name-drop | "[Mutual connection] suggested I reach out" | Genuine connection exists |

High-Performing Subject Line Patterns:

  • "Quick question about [specific topic]" - 32% average open rate
  • "Saw your [specific company news]" - 29% average open rate
  • "[Specific result] in [timeframe]" - 27% average open rate
  • "[Mutual connection] suggested I reach out" - 38% average open rate

Email Opening Hooks

Test:

  • Company-specific observation vs. generic opener
  • Pain-focused vs. benefit-focused
  • Short punchy opening vs. longer contextual opening

Example:

  • A: "I hope this email finds you well" (generic, low performance)
  • B: "You posted 3 SDR roles in 2 weeks" (specific, high performance)

Call-to-Action

Test:

  • Question-based vs. statement-based
  • Low-friction vs. direct ask
  • Specific time vs. open-ended

Example:

  • A: "Let me know if you're interested" (passive, unclear)
  • B: "Are you free Thursday at 2pm for 15 minutes?" (specific, easy yes/no)

Email Length

Test:
  • Under 50 words vs. 50-125 words vs. 125-200 words

Data shows:

  • <50 words: 5.2% reply rate
  • 50-75 words: 4.8% reply rate
  • 75-125 words: 3.9% reply rate
  • 125+ words: <2% reply rate
Shorter consistently outperforms longer.

Personalization Depth

Test Levels:

  • None (generic blast)
  • Basic ({{firstName}})
  • Company-level (reference company news)
  • Role-based (role-specific pain point)
  • Trigger-based (specific event reference)
Trigger-based personalization typically achieves 5-8% reply rates vs. 1-2% for generic.


A/B Testing Benchmarks

Statistical Significance

For cold email tests, aim for:

| Confidence Level | Interpretation |
|------------------|----------------|
| <90% | Inconclusive - test larger sample |
| 90-95% | Likely winner - proceed with caution |
| 95%+ | Clear winner - roll out broadly |

Minimum Detectable Effect

With 200 sends per version, you can reliably detect:

| Lift Required | Confidence |
|---------------|------------|
| <15% improvement | Need larger sample |
| 15-25% improvement | Good confidence |
| 25%+ improvement | High confidence |

Common Test Results

| Test Type | Average Winning Lift | Frequency of Clear Winner |
|-----------|----------------------|---------------------------|
| Subject line | 15-35% | 70% of tests |
| Opening hook | 20-40% | 60% of tests |
| CTA | 15-25% | 55% of tests |
| Email length | 25-50% | 80% of tests |
| Send time | 5-15% | 40% of tests |

Email length and subject lines most frequently produce clear winners.


A/B Testing Best Practices

Do's

Test One Variable at a Time
Changing multiple elements confounds results. Test subject line OR opening hook, not both simultaneously.

Test Similar Audiences
Segment by ICP, then test within segments. VP of Sales at tech companies shouldn't be compared to Marketing Directors at retail.

Reach Statistical Significance
Don't declare winners after 20 sends. Run tests long enough for reliable data.

Document Hypotheses and Results
Maintain a test log. You'll forget what you tested and why without documentation.

Iterate on Winners
If Version B wins by 20%, create Version C testing another variation on that theme.

Don'ts

Don't Test Too-Small Samples
Fewer than 50 sends per version produces unreliable data. Use a minimum of 100, ideally 200+.

Don't Stop Tests Too Early
Weekends produce different response patterns. Run tests for a minimum of two weeks.

Don't Ignore Segment Differences
A subject line winning with SDRs might fail with VPs. Segment before testing.

Don't Test Everything at Once
Focus on high-impact elements first (subject lines, CTAs). Save send-time testing for later.

Don't Assume One Winner for All
Your winning subject line for cold emails might bomb for warm follow-ups. Context matters.


A/B Testing Tools

Email Platforms with Built-in Testing:

  • Firstsales.io - Automatic A/B testing on sequences
  • Mailchimp - Advanced split testing for marketing emails
  • HubSpot - Marketing email A/B testing
  • Outreach - Sales engagement testing

Manual Testing:
Split lists evenly and track results in a spreadsheet. Less elegant, but it works on any platform.
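
Even a minimal scripted log beats memory. A sketch that writes hypothetical results to a CSV (the field names are made up for illustration):

```python
# Write one test's results to a CSV log.
import csv

rows = [
    {"test": "subject_q1", "version": "A", "sends": 250, "opens": 78, "replies": 12},
    {"test": "subject_q1", "version": "B", "sends": 250, "opens": 95, "replies": 24},
]

with open("ab_test_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```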

Analytics:
Most CRMs track open rates, reply rates, and conversion. Export data for deeper analysis.


Common A/B Testing Mistakes

Short-Circuiting Sample Size:
Sending 20 emails of each version and declaring a winner. Results will be noise, not signal.

Testing Incompatible Audiences:
Comparing response rates from prospects in different industries, roles, or company sizes. Apples-to-oranges comparisons produce misleading data.

Changing Multiple Variables:
Testing a new subject line AND new opening simultaneously. You won't know which drove performance differences.

Stopping Tests Prematurely:
Declaring a winner after 3 days when some prospects respond weeks later. Run tests for a minimum of two weeks.

Ignoring Statistical Significance:
Acting on 5% differences that could be random noise. Look for 15%+ lifts for confidence.

Testing Without Hypothesis:
Randomly trying variations without specific predictions. You won't learn what actually works.

One-and-Done Testing:
Running one test and never testing again. Markets change. What worked last quarter might not work this quarter.


Advanced A/B Testing

Multivariate Testing

Multivariate testing varies multiple elements simultaneously. It requires larger sample sizes but uncovers interaction effects between elements.

Example: Testing subject line × opening hook combinations:

  • Question subject + personalized opening
  • Question subject + generic opening
  • Statement subject + personalized opening
  • Statement subject + generic opening
Requires 4× sample size but reveals combinations that outperform individual elements.
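
The grid of test cells is just the Cartesian product of the variant lists. A minimal sketch:

```python
# Enumerate subject-line x opening-hook combinations.
from itertools import product

subjects = ["question", "statement"]
openings = ["personalized", "generic"]

cells = list(product(subjects, openings))
for subject, opening in cells:
    print(f"{subject} subject + {opening} opening")

print(f"{len(cells)} cells -> roughly {len(cells)}x the usual sample size")
```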

Sequential Testing

Instead of A vs. B simultaneously, test sequentially:

  • Week 1: Send Version A to 200 prospects
  • Week 2: Send Version B to 200 prospects
  • Compare results
Less statistically pure but works when simultaneous testing isn't feasible.

Bayesian Testing

Advanced statistical approach that updates probability of winning as data arrives. Allows faster decisions with smaller samples.

Requires specialized tools or statistical expertise.
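
For intuition, the core idea fits in a few lines: model each version's reply rate with a Beta posterior and ask how often B beats A. A sketch with hypothetical counts and a uniform prior:

```python
# Beta-Bernoulli readout: P(version B's true reply rate > version A's).
import numpy as np

rng = np.random.default_rng(0)

a_replies, a_sends = 12, 250  # hypothetical interim results
b_replies, b_sends = 24, 250

# Posterior samples under a uniform Beta(1, 1) prior
a_post = rng.beta(1 + a_replies, 1 + (a_sends - a_replies), 100_000)
b_post = rng.beta(1 + b_replies, 1 + (b_sends - b_replies), 100_000)

print(f"P(B beats A) = {(b_post > a_post).mean():.1%}")
```

With these counts, the posterior puts roughly a 98% probability on B being better, which a practitioner might act on earlier than a fixed-sample test would allow.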


Key Takeaways

  • A/B testing replaces opinions with data—test hypotheses, don't guess
  • Test one variable at a time for clear results
  • Minimum 100-200 sends per version for statistical significance
  • Subject lines and opening hooks have highest testing ROI
  • Document all tests and results for learning
  • Iterate on winners—Version B winning creates opportunity for Version C
  • Test within segments, not across different ICPs
  • Look for 15%+ lifts for confidence in declaring winners



© 2026 FirstSales.io All rights reserved.