T-test

Quick answer

A t-test is a statistical hypothesis test used to determine whether there is a significant difference between the means of two groups, commonly applied in A/B testing to compare average metrics like revenue per user or time on site.

Key takeaways

  • A t-test helps evaluate whether an experiment result is reliable enough to act on.
  • It should be reviewed together with sample size, duration, effect size, and business impact.
  • It is most useful when the hypothesis and primary metric are defined before the test starts.

Definition

A t-test is a statistical hypothesis test that compares the means of two groups and asks whether the observed difference is larger than what random sampling variation would plausibly produce. In A/B testing it is commonly applied to continuous metrics such as revenue per user, average order value, or time on site, where the variation and the control are the two groups being compared.

What T-test means in A/B testing

The t-test calculates a t-statistic that measures the ratio of the difference between group means to the variability within the groups. From this statistic it derives a p-value: the probability of observing a difference at least as large as the one measured if no real effect exists. T-tests are most appropriate for continuous metrics that are roughly normally distributed, and they become more reliable as sample sizes grow.
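The calculation can be sketched in a few lines. The helper below is an illustrative implementation, not taken from any specific library; it computes Welch's version of the t-statistic, which does not assume the two groups have equal variance, along with its approximate degrees of freedom:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t-statistic and approximate degrees of
    freedom: the difference in means divided by the combined
    standard error of the two groups."""
    na, nb = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / na + var_b / nb)  # standard error of the difference
    t = (mean_b - mean_a) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (var_a / na + var_b / nb) ** 2 / (
        (var_a / na) ** 2 / (na - 1) + (var_b / nb) ** 2 / (nb - 1)
    )
    return t, df
```

The t-statistic is then compared against the t-distribution with `df` degrees of freedom to obtain the p-value; in practice a statistics library (for example `scipy.stats.ttest_ind` with `equal_var=False`) handles that final step.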

Why T-test matters

T-tests enable experimenters to make statistically rigorous decisions about whether observed differences in continuous metrics are real or simply due to random chance. They're essential for analyzing metrics beyond simple conversion rates, such as average order value, session duration, or pages per visit. Understanding when and how to apply t-tests helps ensure accurate interpretation of test results for revenue and engagement metrics.

Example of T-test

After running an A/B test on your checkout page redesign, you use a t-test to check whether the variation's average order value of $87.50 is significantly higher than the control's $82.30. If the resulting p-value falls below 0.05, you can conclude with 95% confidence that the increase reflects a real effect rather than random variation.
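To make the arithmetic concrete, here is a minimal sketch of that decision. The means match the example above, but the sample sizes and standard deviations are invented purely for illustration:

```python
import math

# Hypothetical summary statistics: the means come from the example
# above; sample sizes and standard deviations are made up.
mean_b, sd_b, n_b = 87.50, 40.0, 2000  # variation (checkout redesign)
mean_a, sd_a, n_a = 82.30, 40.0, 2000  # control

se = math.sqrt(sd_a ** 2 / n_a + sd_b ** 2 / n_b)  # standard error of the difference
t = (mean_b - mean_a) / se

# With ~4000 users the t-distribution is close to normal, so the
# two-sided 95% critical value is about 1.96.
significant = abs(t) > 1.96
ci_low = (mean_b - mean_a) - 1.96 * se
ci_high = (mean_b - mean_a) + 1.96 * se
```

For these illustrative numbers t comes out around 4.1, well past the 1.96 cutoff, and the 95% confidence interval for the lift (roughly $2.70 to $7.70) excludes zero, matching the conclusion in the example.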

How to use T-test

Use a t-test after you have chosen a primary metric and collected enough traffic for a reliable read. Avoid checking it in isolation; weigh it alongside effect size, confidence intervals, practical impact, and whether the test ran long enough to cover normal traffic patterns.
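"Enough traffic" can be estimated before launch. The sketch below uses the common normal-approximation rule of thumb for a two-sample t-test; it is a planning estimate rather than an exact power calculation, and the example numbers are hypothetical:

```python
import math
from statistics import NormalDist

def samples_per_group(sigma, min_detectable_diff, alpha=0.05, power=0.80):
    """Rough per-group sample size needed to detect a given
    difference in means with a two-sided t-test, using the
    normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(
        2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / min_detectable_diff ** 2
    )

# Hypothetical: order values with a $40 standard deviation,
# aiming to detect a $5 lift in average order value.
n = samples_per_group(sigma=40.0, min_detectable_diff=5.0)
```

For these illustrative numbers the requirement is on the order of 1,000 users per group; halving the detectable lift or doubling the metric's noise multiplies the requirement fourfold.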

Common mistake

A common mistake is treating the t-test as a yes-or-no shortcut while ignoring sample size, test duration, and practical business impact. A statistically significant result can still be too small, too noisy, or too risky to ship.
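One way to see this failure mode: with enough traffic, even a negligible lift clears the significance bar. The numbers below are made up purely to illustrate the point:

```python
import math

# Hypothetical: a huge test where the variation lifts a metric with
# a $40 standard deviation by just 10 cents per user.
n = 2_000_000                # users per group
lift, sd = 0.10, 40.0

se = sd * math.sqrt(2 / n)   # standard error of the difference
t = lift / se                # clears the 1.96 bar: "statistically significant"
d = lift / sd                # Cohen's d: a negligible standardized effect
```

The t-statistic exceeds the 1.96 threshold, but the standardized effect size is tiny; whether a 10-cent lift justifies shipping the change is a business judgment the p-value alone cannot answer.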

FAQ

What does t-test mean in A/B testing?

A t-test is a statistical hypothesis test used to determine whether there is a significant difference between the means of two groups, commonly applied in A/B testing to compare average metrics like revenue per user or time on site.

Why does t-test matter for experiments?

T-tests enable experimenters to make statistically rigorous decisions about whether observed differences in continuous metrics are real or simply due to random chance. They're essential for analyzing metrics beyond simple conversion rates, such as average order value, session duration, or pages per visit. Understanding when and how to apply t-tests helps ensure accurate interpretation of test results for revenue and engagement metrics.

How should teams use t-test in an experiment?

Use a t-test after you have chosen a primary metric and collected enough traffic for a reliable read. Avoid checking it in isolation; weigh it alongside effect size, confidence intervals, practical impact, and whether the test ran long enough to cover normal traffic patterns.
