A/B Testing Terms Every Growth Marketer Should Know [Glossary]

To help you master the lingo and become a more effective marketer, we've assembled this comprehensive glossary of A/B testing terms. Whether you're a seasoned professional, new to the industry, or just curious about how cutting-edge marketers convert leads, this resource will help you stay informed and up to date.

A

A/A Testing

A/A testing is a method used in website optimization where the same webpage or other marketing material is tested against itself. It is mainly conducted to check that the testing tools are working properly and are not reporting differences where none exist.

Learn More

A/B Testing

A/B testing or split testing is a method of comparing two versions of a web page or other user experience to determine which one performs better. It's a way to test changes to your webpage against the current design and determine which one produces better results.

Learn More
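
A classic frequentist way to compare the two versions is a two-proportion z-test. The sketch below is a minimal illustration; the visitor and conversion counts are invented example data, not figures from the article.

```python
# Minimal sketch of a frequentist A/B comparison via a two-proportion z-test.
# The counts passed in at the bottom are hypothetical example data.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (absolute lift, two-sided p-value) for B vs. A conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided tail area
    return p_b - p_a, p_value

lift, p = two_proportion_z_test(conv_a=200, n_a=4000, conv_b=250, n_b=4000)
print(f"absolute lift: {lift:.4f}, p-value: {p:.4f}")
```

With these example numbers the p-value lands below the common 0.05 threshold, so the observed lift would typically be called statistically significant.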

Above The Fold

Above the fold refers to the portion of a webpage that is immediately visible in the browser viewport when the page first loads, without any scrolling required. The term originates from newspaper publishing, where the most important content appeared on the top half of the folded front page. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Adobe Commerce

Adobe Commerce (formerly Magento) is an enterprise-grade, open-source e-commerce platform offering extensive customization and scalability for large, complex online retail operations. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

Alpha

Alpha is the significance level threshold used in hypothesis testing that represents the probability of making a Type I Error, or the acceptable risk of detecting a false positive result.

Learn More

Alternative Hypothesis

Alternative Hypothesis is the statement in hypothesis testing that proposes there is a real, measurable difference between the control and treatment variations in an A/B test.

Learn More

Analysis

Analysis in marketing refers to the process of examining and interpreting data or information to guide business decisions. It involves gathering data from various sources, such as sales figures, customer feedback, and market trends, and then using that data to evaluate the effectiveness of your marketing strategies, identify opportunities for improvement, and make informed decisions about future marketing efforts. In A/B testing, it helps teams define how an experiment is structured, measured, and interpreted before they act on the result.

Learn More

Anti-flickering Script

An anti-flickering script is a code snippet that temporarily hides page content while an A/B testing tool loads and applies variations, preventing visitors from seeing the original content before it changes to the test variation. It eliminates the visual flash that occurs during variation rendering.

Learn More

Asynchronous Loading

Asynchronous loading is a technique where web page elements, scripts, or resources load independently without blocking the rendering of other page content. Scripts marked as asynchronous download in parallel with page parsing and execute as soon as they're available, without waiting for or delaying other resources. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Average Revenue per User (ARPU)

Average Revenue per User (ARPU) is a performance metric that illustrates the average revenue generated from each user or customer of your service or product within a specific time frame. It is calculated by dividing the total revenue made from customers or users by the total number of users within that time period. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More
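
The calculation described above is a simple ratio; here it is as a one-line helper with made-up revenue and user figures.

```python
# Worked example of the ARPU formula from the definition above.
# The revenue and user counts are hypothetical.
def arpu(total_revenue, total_users):
    """Average Revenue per User: total revenue divided by total users."""
    return total_revenue / total_users

print(arpu(total_revenue=50_000.0, total_users=2_000))  # 25.0 per user
```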

B

Baseline

A baseline in marketing is the starting point by which you measure change or improvement in a campaign or strategy. It's a reference point that allows you to compare past performance to current performance after implementing new changes or strategies. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Bayes Theorem

Bayes theorem is a mathematical formula that describes how to update the probability of a hypothesis based on new evidence, forming the foundation of Bayesian A/B testing by combining prior beliefs with observed data to produce posterior probabilities.

Learn More
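
For conversion rates, the Bayesian update described above has a convenient closed form: a Beta prior combined with binomial data yields a Beta posterior. The sketch below is illustrative; the prior and the observed counts are assumptions.

```python
# Sketch of a Bayesian update for a conversion rate (Beta-Binomial model).
# Prior parameters and observed counts are illustrative assumptions.

def update_beta(prior_alpha, prior_beta, conversions, visitors):
    """Conjugate update: posterior is Beta(alpha + successes, beta + failures)."""
    return prior_alpha + conversions, prior_beta + (visitors - conversions)

# Start from a weak Beta(1, 1) (uniform) prior, then observe
# 30 conversions out of 1,000 visitors.
a, b = update_beta(1, 1, conversions=30, visitors=1000)
posterior_mean = a / (a + b)
print(f"posterior Beta({a}, {b}), mean conversion rate = {posterior_mean:.4f}")
```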

Bayesian Statistics

Bayesian Statistics is a statistical approach that treats probability as a degree of belief and continuously updates the probability of a hypothesis being true as new data is collected during an A/B test.

Learn More

Below The Fold

Below the fold is derived from the print newspaper terminology where the most important stories were placed "above the fold" to grab the attention of potential buyers. Similarly, in the digital space, below the fold refers to the portion of a webpage that is not immediately visible when the page loads, and the user must scroll down to see it. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Benchmarking

Benchmarking in A/B testing means comparing the current performance of a page, funnel, metric, or experiment against a reference point such as a historical baseline, industry standard, competitor pattern, or previous test result. It gives teams context for judging whether a test result is actually meaningful.

Learn More

Beta

Beta is the probability of making a Type II Error in hypothesis testing, representing the risk of failing to detect a true difference between variations when one actually exists.

Learn More

BigCommerce

BigCommerce is a SaaS e-commerce platform that provides enterprise-level features and flexibility for mid-market to large online retailers without transaction fees. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

Bounce Rate

Bounce rate is a metric that represents the percentage of visitors who enter your website and then leave ("bounce") without viewing any other pages or taking any further action. It essentially means they have not interacted more deeply with the site. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

C

Call-to-Action (CTA)

A Call-to-Action (CTA) is a prompt on a website that tells the user to take some specified action. This can be in the form of a button, link, or image designed to encourage the user to click and continue down a conversion funnel. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

CDN

A CDN (Content Delivery Network) is a geographically distributed network of servers that cache and deliver website content from locations closest to end users, reducing latency and improving page load speeds. It stores copies of static assets like images, CSS, JavaScript, and videos across multiple data centers worldwide. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Chance to Win

Chance to win in A/B testing is the estimated probability that a variant will outperform the control on the chosen primary metric. It is most common in Bayesian-style reporting, where results are expressed as probabilities rather than only p-values.

Learn More
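
One common way to estimate this probability is Monte Carlo sampling from each variant's posterior distribution. The sketch below assumes uniform Beta(1, 1) priors, and the conversion counts are invented example data.

```python
# Hedged sketch: estimating "chance to win" by sampling conversion rates
# from each variant's Beta posterior. Counts are invented example data.
import random

def chance_to_win(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Sample a plausible conversion rate per variant (Beta(1, 1) prior).
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

print(f"P(B beats A) = {chance_to_win(200, 4000, 250, 4000):.3f}")
```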

Chi-square Test

A chi-square test is a statistical method used to determine whether there is a significant association between categorical variables, most commonly applied in A/B testing to compare conversion rates or other binary outcome metrics between variations.

Learn More
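
For a simple A/B comparison of converted vs. not-converted visitors, the chi-square statistic can be computed by hand from a 2x2 table. The counts below are illustrative.

```python
# A minimal 2x2 chi-square test of independence for conversion counts,
# implemented with only the standard library. Counts are illustrative.
def chi_square_2x2(conv_a, n_a, conv_b, n_b):
    """Return the chi-square statistic for converted vs. not, A vs. B."""
    observed = [
        [conv_a, n_a - conv_a],
        [conv_b, n_b - conv_b],
    ]
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

stat = chi_square_2x2(200, 4000, 250, 4000)
# With 1 degree of freedom, a statistic above 3.841 means p < 0.05.
print(f"chi-square = {stat:.2f}, significant at 5%: {stat > 3.841}")
```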

Click Through Rate (CTR)

The Click Through Rate (CTR) is a metric that measures the percentage of impressions that result in a click, calculated by dividing clicks by impressions. It is a critical measurement for understanding the efficiency and effectiveness of a specific marketing campaign or advertisement. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Cohort

A cohort is a group of users who share a common characteristic or experience within a designated time period. In marketing, cohorts are often used for analyzing behaviors and trends or making comparisons among groups. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Confidence Interval

A confidence interval is a range of values, derived from a statistical calculation, that is likely to contain an unknown population parameter. In marketing, it is often used in A/B testing to determine if the variation of a test actually improves the result.

Learn More
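
For a conversion rate, a quick 95% confidence interval can be sketched with the normal approximation. This is a simplified illustration (a Wilson interval is more robust for small samples), and the counts below are example numbers.

```python
# Sketch of a 95% confidence interval for a conversion rate using the
# normal approximation; counts are example numbers, not real data.
from math import sqrt

def conversion_ci(conversions, visitors, z=1.96):
    rate = conversions / visitors
    margin = z * sqrt(rate * (1 - rate) / visitors)   # z * standard error
    return rate - margin, rate + margin

low, high = conversion_ci(conversions=250, visitors=4000)
print(f"conversion rate 95% CI: [{low:.4f}, {high:.4f}]")
```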

Confidence Level

A confidence level refers to the statistical measure in an A/B test that expresses the degree of certainty attached to the result. For example, a 95% confidence level means that if you repeated the same experiment many times, the calculated interval would capture the true effect in about 95% of those runs; it does not mean any single result has a 95% chance of being correct.

Learn More

Confounding Variables

Confounding variables are external factors that influence both the independent variable (the change being tested) and the dependent variable (the metric being measured), creating a false or misleading association between them.

Learn More

Control

In the context of A/B testing and marketing, a control is the original, unchanged version of a webpage, email, or other piece of marketing content that is used as a benchmark to compare against a modified version, known as the variant. The performance of the control versus the variant helps determine whether the changes lead to improved results, like higher clickthrough rates, conversions, or other goals.

Learn More

Control Group

A Control Group refers to a set of users in an A/B test who are exposed to the existing or 'control' version of your website, product, or marketing campaign. This group is used to compare the behavior and performance against those who experienced the new or ‘test’ version.

Learn More

Conversion Rate

The conversion rate is the percentage of users who take a desired action on your website or in your marketing campaign. It's calculated by dividing the number of conversions by the total number of visitors. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Cookies

A Cookie is a small piece of data stored on a user's computer by the web browser while browsing a website. These cookies help websites remember information about the user's visit, like preferred language and other settings, thus providing a smoother and more personalized browsing experience. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Correlation

In marketing, correlation is a statistical measurement that describes the relationship between two variables. It is used to understand the influence of one variable on another. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Covariance

Covariance is a statistical measure that helps you understand how two different variables move together. It's used to gauge the linear relationship between these variables. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Credible Interval

A credible interval is a range of values within which a parameter (such as conversion rate or effect size) lies with a specified probability in Bayesian analysis, representing the uncertainty around an estimate after observing data. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

E

Ecommerce Platform

An Ecommerce Platform is software that enables businesses to build, manage, and operate online stores, providing essential functionality for product display, transactions, and order management. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

Effect Size

Effect size refers to the magnitude or intensity of a statistical phenomenon or experiment result. In simpler terms, it measures how big of an effect a certain factor or variable has in a study or test.

Learn More

Engagement Rate

Engagement rate is a metric used in digital marketing to measure the level of interaction or engagement that a piece of content receives from an audience. It includes actions like likes, shares, comments, clicks etc. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Entry Page

An Entry Page is the first page that a visitor lands on when they come to your website from an external source, such as a search engine, social media link, or another website. It acts as the first impression of your website for many visitors. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Error Rate

The error rate is the percentage of errors that occur in a certain process or action, often in reference to online activities or technical processes. In a marketing context, it might refer to the percentage of failed or incorrect actions such as unsuccessful page loads or incomplete transactions. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Exit Intent

Exit intent is a technology used in digital marketing to detect when a site visitor is about to leave the website or page. It usually triggers a pop-up or special message attempting to convince the user to stay on the page or take some action like signing up for a newsletter, purchasing a product, or downloading a resource. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Exit Page

An exit page refers to the last web page that a visitor views before they leave your website. It's where the visitor's session on your site ends. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Exit Rate

The exit rate is the percentage of visitors who leave your website from a specific page. This metric is used to identify which pages are the final destination before a visitor leaves, indicating possible issues with those pages. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Expected Loss

Expected loss is the average amount of value (revenue, conversions, or other metrics) you would lose by choosing a particular variation if it turns out to be inferior, calculated by integrating the loss function over the posterior probability distribution. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More
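
Expected loss is often estimated by Monte Carlo: sample conversion rates from each variant's posterior and average the shortfall in the cases where the chosen variant turns out to be worse. The sketch below assumes Beta(1, 1) priors and invented counts.

```python
# Monte Carlo sketch of expected loss: the average conversion-rate shortfall
# incurred by shipping B in the scenarios where B is actually worse.
# Counts and priors are illustrative assumptions.
import random

def expected_loss_if_choosing_b(conv_a, n_a, conv_b, n_b,
                                draws=100_000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        total += max(rate_a - rate_b, 0.0)   # loss only when A was better
    return total / draws

loss = expected_loss_if_choosing_b(200, 4000, 250, 4000)
print(f"expected loss of shipping B = {loss:.6f} (conversion-rate points)")
```

A tiny expected loss is often used as a stopping rule in Bayesian testing: once the cost of being wrong is negligible, the variant can be shipped.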

Experience Optimization

Experience optimization, often abbreviated as EXO, refers to the use of various techniques, tools, and methodologies to improve the user experience during interactions with a product, system, or service. This could be an online experience, such as website navigation or mobile app use, or offline experiences such as customer service or sales interactions. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

F

False Negative

A false negative is a test result that fails to detect an effect that actually exists. In marketing terms, a false negative occurs when a test overlooks a real improvement in a campaign, ad, or email and wrongly concludes the change made no difference.

Learn More

False Positive

A false positive in marketing terms refers to a result that incorrectly indicates that a particular condition or attribute is present. For instance, in A/B testing, a false positive occurs when a test indicates that a new webpage design is significantly better at driving conversions when in reality it is not.

Learn More

Flickering

Flickering is the brief visual flash or content shift that occurs when a page initially loads with original content and then visibly changes to display an A/B test variation after the testing script executes. It creates a jarring user experience where visitors see the page transform before their eyes.

Learn More

Frequentist Statistics

Frequentist Statistics is the traditional statistical approach used in A/B testing that determines whether results are significant by calculating the probability of observing the data (or more extreme data) if the null hypothesis were true.

Learn More

Funnel

A funnel in marketing refers to the journey that a potential customer takes from their first interaction with your brand to the ultimate goal of conversion. It's often described as a funnel because many people will become aware of your business or product (the widest part of the funnel), but only a portion of those will move further down the funnel to consider your offering, and even fewer will proceed to the final step of making a purchase (the narrowest part of the funnel). In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

H

Heatmapping

Heatmapping is a data visualization technique that shows where users have clicked, scrolled, or moved their mouse on your website. It uses colors to represent different levels of activity: warm colors like red and orange signify areas where users interact the most, while cool colors like blue signify less interaction. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

HTTP Requests

HTTP Requests are individual calls made by a web browser to a server to fetch resources like HTML files, stylesheets, scripts, images, fonts, and other assets needed to render a webpage. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Hypothesis

A hypothesis in marketing terms is an assumed outcome or predicted result of a marketing campaign or strategy before it is implemented. It is a statement that forecasts the relationship between variables, such as how a change in a marketing approach (like altering a CTA button color) might affect conversions. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Hypothesis Test

A Hypothesis Test is a statistical method used in A/B testing to assess the validity of a claim or idea about a population parameter. In the context of A/B testing, it's a way to evaluate whether the evidence supports the assumption that a particular change (like a new webpage design or marketing strategy) increases conversions or other key metrics.

Learn More

L

Landing Page Optimization

Landing Page Optimization refers to the process of improving or enhancing each element on your landing page to increase conversions. These elements may include the headline, call-to-action, images, or copy. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Largest Contentful Paint

Largest Contentful Paint (LCP) is a Core Web Vitals metric that measures the time it takes for the largest visible content element (image, video, or text block) to render on the screen from when the page first starts loading. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Law of Large Numbers

The Law of Large Numbers is a statistical principle stating that as sample size increases, the observed average of results will converge toward the true expected value of the population. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Long-run Frequency

Long-run frequency is a frequentist interpretation of probability that defines the likelihood of an event as the proportion of times it would occur if an experiment were repeated infinitely under identical conditions. It represents the observed frequency of outcomes over many trials rather than a subjective belief.

Learn More

Loss Function

A loss function quantifies the cost or negative consequence of making a wrong decision in A/B testing, typically measuring the expected loss in revenue, conversions, or other key metrics that would result from choosing an inferior variation.

Learn More

M

Metrics

Metrics are measurements or data points that track and quantify various aspects of marketing performance. These can include factors like click-through rates, conversion rates, bounce rates, and more. In A/B testing, it helps teams define how an experiment is structured, measured, and interpreted before they act on the result.

Learn More

Minimum Detectable Effect

The Minimum Detectable Effect (MDE) is a crucial concept in experiment design and A/B testing. It represents the smallest change in a metric that an experiment can reliably detect.

Learn More
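
The MDE feeds directly into sample-size planning: the smaller the effect you want to detect, the more visitors you need. The sketch below uses the standard normal-approximation formula at alpha = 0.05 and 80% power; the baseline rate and MDE are assumptions chosen for illustration.

```python
# Rough sample-size sketch: visitors needed per variant to detect a given
# absolute lift (the MDE) at alpha = 0.05 (z = 1.96) and 80% power
# (z = 0.84), using the normal approximation. Inputs are assumptions.
from math import ceil

def sample_size_per_variant(baseline_rate, mde, z_alpha=1.96, z_beta=0.84):
    p1, p2 = baseline_rate, baseline_rate + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Visitors per variant to detect a 1-point absolute lift on a 5% baseline:
print(sample_size_per_variant(baseline_rate=0.05, mde=0.01))
```

Note how halving the MDE roughly quadruples the required sample, since the effect appears squared in the denominator.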

Multi-Armed Bandit

A multi-armed bandit is a testing approach used in marketing to evaluate multiple strategies, offers, or options concurrently and determine which one performs best. It is similar to A/B testing, but instead of splitting the audience evenly among all options, a multi-armed bandit dynamically shifts traffic toward the options that are performing better.

Learn More
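
A toy simulation makes the "dynamic allocation" idea concrete. The epsilon-greedy strategy below is one simple bandit algorithm among several (Thompson sampling and UCB are common alternatives); the true conversion rates are simulated assumptions.

```python
# Toy epsilon-greedy multi-armed bandit: mostly exploit the variant with the
# best observed rate, occasionally explore a random one. The "true" rates
# are simulated assumptions, not real data.
import random

def run_bandit(true_rates, pulls=20_000, epsilon=0.1, seed=3):
    rng = random.Random(seed)
    shown = [0] * len(true_rates)        # times each variant was served
    converted = [0] * len(true_rates)
    for _ in range(pulls):
        if rng.random() < epsilon:       # explore: pick a random variant
            arm = rng.randrange(len(true_rates))
        else:                            # exploit: best observed rate so far
            arm = max(range(len(true_rates)),
                      key=lambda i: converted[i] / shown[i] if shown[i] else 0.0)
        shown[arm] += 1
        converted[arm] += rng.random() < true_rates[arm]
    return shown

traffic = run_bandit(true_rates=[0.05, 0.08])
print(f"traffic split: {traffic}")
```

With these settings most traffic typically ends up flowing to the better-converting arm, which is exactly the behavior that distinguishes a bandit from a fixed 50/50 split.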

Multiple Testing

Multiple Testing is a statistical challenge that occurs when conducting multiple simultaneous hypothesis tests or comparisons, increasing the probability of finding false positive results purely by chance.

Learn More

Multivariate Analysis

Multivariate Analysis is a statistical technique used to analyze data involving more than one variable. This process allows marketers to understand how different variables (like design, color, and location) interact and which combinations influence the outcome. In A/B testing, it helps teams define how an experiment is structured, measured, and interpreted before they act on the result.

Learn More

Multivariate Testing (MVT)

Multivariate Testing (MVT) is a process where multiple variables on a webpage are simultaneously tested to determine the best performing combinations and layouts. Unlike A/B testing that tests one change at a time, MVT allows you to test numerous changes and see how they interact with each other.

Learn More

P

P-hacking

P-hacking, also known as data dredging, is a method in which data is manipulated or selection criteria are modified until a desired statistical result, typically a statistically significant result, is achieved. It involves testing numerous hypotheses on a particular dataset until the data appears to support one.

Learn More

P-value

A p-value in marketing A/B testing is a statistical measure that helps determine whether the difference in conversion rates between two versions of a page is statistically significant or just due to chance. It represents the probability of observing a difference at least as large as the one measured, assuming there is no real difference between the versions.

Learn More

Personalization

Personalization refers to the method of tailoring the content and experience of a website or marketing message based on the individual user's specific characteristics or behaviors. These may include location, browsing history, past purchases, and other personal preferences. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Personalization Testing

This is the process of customizing the user experience on a website or app by offering content, recommendations, or features based on an individual user's behavior, preferences, or demographics. The purpose of personalization testing is to determine the most effective personalized experience for encouraging a user to take a desired action, such as making a purchase, signing up for a newsletter, or completing another conversion goal.

Learn More

Population

In marketing, the population refers to the total group of people that a company or business is interested in reaching with their marketing efforts. This might be all potential customers, a specific geographic area, or a targeted demographic. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Posterior Probability

Posterior probability is the updated probability of a hypothesis being true after taking into account new evidence or data, calculated using Bayesian statistical methods by combining prior beliefs with observed experimental results.

Learn More

Power of a Test

This term refers to the ability of a statistical test to detect a difference when one actually exists. It measures the test’s sensitivity or its capacity to correctly identify true effects.

Learn More

Prior Belief

Prior belief is the probability distribution representing your initial assumptions or existing knowledge about a parameter (such as conversion rate) before collecting new data from an experiment, serving as the starting point for Bayesian analysis.

Learn More

Probability

Probability is a statistical term that measures the likelihood of an event happening. In marketing, it's used to predict outcomes such as the chance a visitor will click a link, buy a product, or engage with content. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Probability Distribution

A Probability Distribution is a mathematical function that gives the probability of each possible outcome in an experiment. In simple terms, it describes the set of all possible outcomes of an event and how likely each one is to occur.

Learn More

R

Randomization

Randomization in marketing refers to the method of assigning participants in a test, such as an A/B test, to different groups without any specific pattern. It ensures that the test is fair and unbiased, and that any outcome differences between the groups can be attributed to the changes being tested, not some pre-existing factor or variable.

Learn More

Randomization Bias

Randomization bias occurs when the process of randomly assigning users to test variations is flawed or compromised, resulting in systematic differences between groups that can skew test results.

Learn More

Regression Analysis

Regression Analysis is a statistical method used in marketing to understand the relationship between different variables. It helps predict how a change in one variable, often called the independent variable, can affect another variable, known as the dependent variable. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

Retention

Retention refers to the ability to keep or hold on to something, such as customers or users, over a certain period of time. In marketing, it's about the strategies and tactics businesses use to encourage customers to continue using their product or service, rather than switching to a competitor. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Return on Investment (ROI)

Return on Investment (ROI) is a performance measure that is used to evaluate the efficiency or profitability of an investment, or to compare the efficiency of different investments. It's calculated by dividing the profit from an investment (return) by the cost of that investment. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

Revenue per Visitor (RPV)

Revenue Per Visitor (RPV) is a measure used in online business to determine the amount of money generated from each visitor to a website. It's calculated by dividing the total revenue by the total number of visitors. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

S

Sample Size

Sample size refers to the number of individual data points or subjects included in a study or experiment. In the context of A/B testing or marketing, the sample size is the total number of people or interactions (like email opens, webpage visits, or ad viewers) you measure to gather data for your test or analysis.

Learn More

Script Execution Time

Script Execution Time is the duration the browser's JavaScript engine spends parsing, compiling, and running JavaScript code on a webpage. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Secondary Action

A Secondary Action is an alternative operation that a user can take on a webpage apart from the primary goal or action. This can be actions like "Save for later," "Add to wishlist," or "Share with a friend." In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Segmentation

Segmentation is the process of dividing your audience or customer base into distinct groups based on shared characteristics, such as age, location, buying habits, interests, and more. By segmenting your audience, you can create more targeted and personalized marketing campaigns that better address the needs and wants of specific groups, leading to higher engagement and conversion rates. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

Server Latency

Server latency is the time delay between when a server receives a request and when it begins sending a response, representing the duration required for the server to process the request. It measures server-side processing efficiency independent of network transmission time. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Server-Side Testing

Server-Side Testing is a type of A/B testing where the test variations are rendered on the server before the webpage or app is delivered to the user's browser or device. This type of testing allows for deeper, more complex testing because it involves the back-end systems, and it's particularly useful for testing performance optimization changes such as load times or response times.

Learn More

Shopify

Shopify is a fully-hosted, subscription-based e-commerce platform that enables businesses to create and manage online stores without handling technical infrastructure. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

Significance Level

The significance level, often denoted by the Greek letter alpha (α), is the probability threshold below which a test's p-value must fall for the result to be considered statistically significant. It determines whether you reject or fail to reject the null hypothesis in hypothesis testing.

Learn More

Split URL Testing

Split URL Testing, also known as redirect testing, is a form of A/B testing used to compare two versions of a webpage hosted at different URLs. In this test, the traffic to your website is divided between the original webpage (version A) and a different version hosted at its own URL (version B) to see which one leads to more conversions or achieves your designated goal more effectively.

Learn More

Standard Deviation

Standard Deviation is a statistical term that measures the amount of variability or dispersion in a set of data values. In simpler terms, it shows how much the data varies from the average or mean. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
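Two variants can have nearly identical averages yet very different spreads, which changes how much you should trust each average. A small illustration with Python's standard library (the order values are made up for the example):

```python
import statistics

# Hypothetical order values (in dollars) from two variants of a checkout page
control = [52, 48, 61, 47, 55]
treatment = [30, 95, 22, 88, 40]

# The two variants have similar means...
print(statistics.mean(control))    # 52.6
print(statistics.mean(treatment))  # 55.0

# ...but the treatment's revenue is far more dispersed, so its mean
# is a less reliable summary and needs a larger sample to pin down.
print(statistics.stdev(control))
print(statistics.stdev(treatment))
```

Higher variance in a metric generally means you need more data before a difference in means can be called significant.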

Learn More

Start Time To Variant

Start Time To Variant (STTV) is a performance metric that measures the elapsed time from when a page begins loading until the A/B testing variant is fully applied and visible to the user.

Learn More

Statistical Power

Statistical Power is the probability that a test will correctly reject a false null hypothesis. In other words, it's the likelihood that if there actually is a difference (in the case of A/B testing, a difference between the two versions being tested), the test will detect it.

Learn More

Statistical Significance

Statistical Significance is the determination that an observed difference between test variations is unlikely to have occurred by chance alone, typically indicated when the p-value falls below the predetermined alpha threshold.
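The p-value-versus-alpha comparison can be computed directly. A minimal sketch of a pooled two-proportion z-test using only the standard library (the conversion numbers are invented for the example):

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-tailed p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 200/4000 conversions (5.0%) vs 260/4000 (6.5%): significant at alpha = 0.05?
p = two_proportion_p_value(200, 4000, 260, 4000)
print(p < 0.05)  # True
```

If the p-value falls below the chosen alpha, the observed lift is declared statistically significant; otherwise the test fails to reject the null hypothesis.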

Learn More

Statistically Significant

This term refers to a result that is unlikely to have occurred by chance. In marketing and A/B testing, it's used to indicate that a certain change or difference (like a higher click-through rate or more conversions) is not just a random occurrence, but is significant enough to be considered a meaningful result.

Learn More

STTV

STTV is the acronym for Start Time To Variant, representing the duration between page load initiation and the moment an A/B test variant becomes visible to users.

Learn More

Subjective Probability

Subjective probability is a Bayesian interpretation of probability that represents an individual's degree of belief or confidence about an uncertain event, based on available evidence and prior knowledge. Unlike frequentist probability, it treats probability as a measure of personal certainty rather than long-run frequency. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.

Learn More

T

T-test

A t-test is a statistical hypothesis test used to determine whether there is a significant difference between the means of two groups, commonly applied in A/B testing to compare average metrics like revenue per user or time on site.
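For unequal variances between variants (the usual case in practice), Welch's version of the t statistic is the standard choice. A stdlib sketch (the function and the time-on-site samples are illustrative):

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic: the difference in sample means scaled by
    the combined standard error of the two samples."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (mean_b - mean_a) / se

# Hypothetical time-on-site (minutes) for two variants; for large
# samples, |t| well above ~2 suggests the difference is not chance.
control = [10, 12, 11, 13, 12, 11, 10, 12]
treatment = [14, 15, 13, 16, 15, 14, 15, 14]
print(round(welch_t(control, treatment), 2))
```

In a real analysis you would convert the statistic to a p-value using the t-distribution with the Welch-Satterthwaite degrees of freedom, which statistics libraries handle for you.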

Learn More

Test Group

A test group refers to a group of individuals in a marketing A/B or split test who are exposed to a new version of a certain marketing element, such as a webpage, email, or ad. The behaviors, interactions, and responses of this group to the new element are then tracked and compared with those of a control group (who see the unaltered or original version) to assess the effectiveness and performance of the change or variation made.

Learn More

Testing Period

A Testing Period is a designated amount of time during which you run an experiment or test, like an A/B test, to evaluate the performance of a particular marketing campaign, webpage, or feature. The length of a testing period can vary based on the objectives of the test, the traffic your website receives, and the statistical significance you want to achieve.

Learn More

Time to First Byte

Time to First Byte (TTFB) is the measurement of how long a browser waits to receive the first byte of data from a server after making an HTTP request. It represents the sum of redirect time, DNS lookup, server processing, and network latency before content begins downloading. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Timeout Setting

Timeout Setting is a configurable parameter in A/B testing tools that determines how long the system waits before defaulting to a control experience when test variations fail to load.
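The underlying logic is a wait-with-deadline: keep checking whether the variant has arrived, and if the deadline passes, show the control so the visitor is never stuck on a blank page. Real tools implement this in client-side JavaScript; the following is only a conceptual Python sketch with an illustrative function name:

```python
import time

def resolve_variant(fetch_variant, timeout_ms=1000):
    """Poll for the variant payload; fall back to the control
    experience if it hasn't loaded before the timeout expires."""
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        variant = fetch_variant()
        if variant is not None:
            return variant
        time.sleep(0.05)  # brief pause before polling again
    return "control"  # timeout reached: default to the original experience

# A variant that never loads falls back to control:
assert resolve_variant(lambda: None, timeout_ms=100) == "control"
```

A shorter timeout protects user experience at the cost of more visitors silently falling back to control, which can dilute the measured lift.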

Learn More

Tracking Code

A tracking code is a piece of script or a unique identifier added to a URL or webpage to monitor and track user behavior on a website. This information is crucial in understanding the effectiveness of marketing efforts, studying traffic sources, user interactions, and subsequent conversions. In A/B testing, tracking codes are what record which variant each visitor saw and whether they went on to convert, so results can be attributed to the right variation.

Learn More

Traffic Quality

Traffic Quality refers to the relevance and engagement level of the visitors coming to your website. High quality traffic generally indicates visitors who are interested in your business, product, or service, engage with your website content, and are more likely to complete a desired action such as making a purchase or signing up for a service. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.

Learn More

Transaction Fees

Transaction Fees are charges levied by e-commerce platforms or payment processors as a percentage of each sale processed through an online store, separate from payment processing costs. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.

Learn More

Treatment

Treatment in the context of A/B testing and marketing refers to a specific version or variation of a webpage, email, or other piece of content that is being tested against others. It's the change you want to test against the current version (often called the 'control') to see if it improves the performance or effectiveness of the page or content in question.

Learn More

Treatment Group

Treatment Group is the set of users in an A/B test who are exposed to the new variation or experimental condition being tested, as opposed to the control group which sees the original version.

Learn More

Two-Tailed Test

A Two-Tailed Test is a statistical test used in A/B testing where a hypothesis is made about a parameter such as the mean. It tests for a difference in both directions: the variation could perform either better or worse than the control. This makes it more conservative than a one-tailed test, which only looks for an effect in a single direction.
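The "both directions" idea shows up directly in the p-value: for the same test statistic, the two-tailed p-value is double the one-tailed one. A quick stdlib illustration (the z value is made up for the example):

```python
from statistics import NormalDist

z = 2.2  # observed z statistic for the measured lift

# One-tailed: only "treatment is better" counts as evidence.
p_one_tailed = 1 - NormalDist().cdf(z)

# Two-tailed: a difference in either direction counts, so the
# p-value doubles and the test is more conservative.
p_two_tailed = 2 * (1 - NormalDist().cdf(abs(z)))

print(round(p_one_tailed, 4))  # 0.0139
print(round(p_two_tailed, 4))  # 0.0278
```

Here the result is significant at alpha = 0.05 under either test, but a two-tailed test demands stronger evidence for the same verdict.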

Learn More

Type I Error

Type I Error is a false positive result that occurs when an A/B test incorrectly concludes there is a significant difference between variations when no true difference exists.

Learn More

Type II Error

Type II Error is a false negative result that occurs when an A/B test fails to detect a real difference between variations, incorrectly concluding there is no significant effect when one actually exists.

Learn More

U

Uncompressed Size

Uncompressed Size is the total file size of web assets (HTML, CSS, JavaScript, images) before any compression algorithms like Gzip or Brotli are applied. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.

Learn More

Usability Testing

This is a technique used to evaluate a product or website by testing it with real users. It involves observing users as they attempt to complete tasks using the product, typically while they're thinking out loud.

Learn More

User Experience (UX)

User Experience (UX) refers to the overall experience a person has when interacting with a website, application, or digital product. It involves the design of the interface, usability, accessibility, and efficiency in achieving the user's goals. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

User Interface (UI)

User Interface (UI) is what people interact with when using a digital product or service, like a website, app, or software program. It includes all the screens, buttons, icons, and other visual elements that help a user to communicate with a device or application. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

User Segmentation

User segmentation refers to the practice of dividing your audience or customers into subgroups based on common characteristics such as demographics, buying habits, interests, engagement, etc. This practice enables businesses to tailor their marketing strategies and messages to resonate better with different audiences, thereby improving relevance and effectiveness. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.

Learn More

User Testing

User testing is a process by which real users interact with a product, software, or website while their actions and reactions are observed by the product team. It's used to gauge the usability and user-friendliness of the product, identify any areas of confusion or frustration, and gather feedback for improvements.

Learn More

V

W

Z

The A/B testing platform for people who
care about website performance

Mida is 10X faster than anything you have ever considered. Try it yourself.