Expected Loss

Quick answer

Expected loss is the average amount of value (revenue, conversions, or other metrics) you would lose by choosing a particular variation if it turns out to be inferior, calculated by integrating the loss function over the posterior probability distribution. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Key takeaways

  • Expected Loss helps evaluate whether an experiment result is reliable enough to act on.
  • It should be reviewed together with sample size, duration, effect size, and business impact.
  • It is most useful when the hypothesis and primary metric are defined before the test starts.

Definition

Expected loss is the average amount of value (revenue, conversions, or other metrics) you would lose by choosing a particular variation if it turns out to be inferior, calculated by integrating the loss function over the posterior probability distribution.

What Expected Loss means in A/B testing

Expected loss represents the risk associated with each possible decision in an A/B test, weighted by the probability of each outcome. It's calculated separately for the decision to implement each variation, accounting for all scenarios in which that choice could be wrong and their associated costs. When the expected loss of choosing the best-performing variation becomes acceptably small, you have sufficient evidence to conclude the test.
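For a conversion-rate test, this calculation is commonly done with Monte Carlo sampling from Beta posteriors. The sketch below is a minimal illustration, not any particular tool's implementation: it assumes Beta(1, 1) priors and hypothetical conversion counts, and estimates the expected loss of shipping variation B as the average shortfall in the scenarios where control actually wins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: conversions out of visitors per arm
conv_a, n_a = 230, 10_000   # control: 2.3%
conv_b, n_b = 250, 10_000   # variation: 2.5%

# Beta(1, 1) prior updated with the data gives each arm's posterior
samples_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
samples_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

# Expected loss of choosing B: in draws where A is better, how much
# conversion rate do we give up? Zero loss in draws where B wins.
loss_choose_b = np.mean(np.maximum(samples_a - samples_b, 0))
print(f"Expected loss of choosing B: {loss_choose_b:.5f} (rate points)")
```

Swapping the roles of the two sample arrays gives the expected loss of keeping control instead; the test can stop once the loss for the leading variation falls below your risk tolerance.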

Why Expected Loss matters

Expected loss provides a practical, business-oriented stopping criterion for A/B tests that is often more intuitive than p-values or confidence levels. It directly answers the question 'How much could we lose by making this decision now?', enabling teams to balance the cost of uncertainty against the cost of delayed implementation. Setting expected loss thresholds aligned with the business's tolerance for risk leads to more efficient testing and better ROI from experimentation programs.

Example of Expected Loss

Your test shows variation B leading with a 2.5% conversion rate versus control's 2.3%, but the expected loss of choosing B is still $3,500 per week, exceeding your $1,000 risk threshold. You continue the test until more data reduces the expected loss to an acceptable level before implementing the change.
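A weekly dollar figure like the one in this example typically comes from translating the posterior expected loss in conversion-rate points into revenue. The numbers below (loss in rate points, weekly traffic, value per conversion) are assumptions chosen only so the arithmetic reproduces the $3,500 above:

```python
# Hypothetical translation of a conversion-rate expected loss into revenue
expected_loss_rate = 0.0007   # 0.07 rate points, from the posterior
weekly_visitors = 100_000     # assumed site traffic per week
value_per_conversion = 50.0   # assumed average value of one conversion

weekly_dollar_loss = expected_loss_rate * weekly_visitors * value_per_conversion
print(f"Expected loss: ${weekly_dollar_loss:,.0f} per week")
```

Expressing the loss in revenue terms is what makes it comparable against a business threshold such as the $1,000 per week mentioned above.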

How to use Expected Loss

Use Expected Loss after you have chosen a primary metric and collected enough traffic for a reliable read. Avoid checking it in isolation; compare it with effect size, confidence, practical impact, and whether the test ran long enough to cover normal traffic patterns.
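The "continue until the risk is tolerable" workflow can be sketched as a simple threshold check repeated as data accumulates. Everything here is illustrative: the Beta(1, 1) priors, the threshold, and the traffic figures (the same 2.3% vs 2.5% rates observed at growing sample sizes) are assumptions, not recommendations.

```python
import numpy as np

def expected_loss_b(conv_a, n_a, conv_b, n_b, draws=200_000, seed=1):
    """Expected loss (in rate points) of shipping B, via Beta posteriors."""
    rng = np.random.default_rng(seed)
    a = rng.beta(1 + conv_a, 1 + n_a - conv_a, draws)
    b = rng.beta(1 + conv_b, 1 + n_b - conv_b, draws)
    return float(np.mean(np.maximum(a - b, 0)))

THRESHOLD = 0.0002  # assumed risk tolerance, in conversion-rate points

# The same underlying rates observed at three growing sample sizes:
for n in (5_000, 20_000, 80_000):
    loss = expected_loss_b(round(0.023 * n), n, round(0.025 * n), n)
    verdict = "ship B" if loss < THRESHOLD else "keep testing"
    print(f"n={n:>6}: expected loss {loss:.5f} -> {verdict}")
```

As the sample grows, the posterior tightens and the expected loss shrinks toward zero, which is why the same observed lift can be "not yet" at low traffic and "safe to ship" later.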

Common mistake

A common mistake is treating Expected Loss as a yes-or-no shortcut while ignoring sample size, test duration, and practical business impact. A statistically interesting result can still be too small, too noisy, or too risky to ship.

FAQ

What does expected loss mean in A/B testing?

Expected loss is the average amount of value (revenue, conversions, or another metric) you would forfeit by shipping a variation that turns out to be inferior, computed by averaging the loss over the posterior distribution. In A/B testing, it helps teams quantify uncertainty and decide whether an observed lift is reliable enough to act on.

Why does expected loss matter for experiments?

It provides a business-oriented stopping criterion: rather than a p-value, it states in metric terms how much you stand to lose by deciding now, letting teams weigh the cost of uncertainty against the cost of delayed implementation.

How should teams use expected loss in an experiment?

Check it only after the primary metric is fixed and enough traffic has accumulated for a reliable read, and review it alongside effect size, practical impact, and test duration rather than in isolation.
