A
A/A Testing
A/A testing is a method used in website optimization where the same webpage or other marketing material is tested against itself. It is mainly conducted to check that the testing tools are working properly and are not reporting differences where none exist.
A/B Testing
A/B testing or split testing is a method of comparing two versions of a web page or other user experience to determine which one performs better. It's a way to test changes to your webpage against the current design and determine which one produces better results.
Above The Fold
Above the fold refers to the portion of a webpage that is immediately visible in the browser viewport when the page first loads, without any scrolling required. The term originates from newspaper publishing, where the most important content appeared on the top half of the folded front page. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Adobe Commerce
Adobe Commerce (formerly Magento) is an enterprise-grade, open-source e-commerce platform offering extensive customization and scalability for large, complex online retail operations. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Alpha
Alpha is the significance level threshold used in hypothesis testing that represents the probability of making a Type I Error, or the acceptable risk of detecting a false positive result.
Alternative Hypothesis
Alternative Hypothesis is the statement in hypothesis testing that proposes there is a real, measurable difference between the control and treatment variations in an A/B test.
Analysis
Analysis in marketing refers to the process of examining and interpreting data or information to guide business decisions. It involves gathering data from various sources, such as sales figures, customer feedback, and market trends, and then using that data to evaluate the effectiveness of your marketing strategies, identify opportunities for improvement, and make informed decisions about future marketing efforts. In A/B testing, it helps teams define how an experiment is structured, measured, and interpreted before they act on the result.
Anti-flickering Script
An anti-flickering script is a code snippet that temporarily hides page content while an A/B testing tool loads and applies variations, preventing visitors from seeing the original content before it changes to the test variation. It eliminates the visual flash that occurs during variation rendering.
Asynchronous Loading
Asynchronous loading is a technique where web page elements, scripts, or resources load independently without blocking the rendering of other page content. Scripts marked as asynchronous download in parallel with page parsing and execute as soon as they're available, without waiting for or delaying other resources. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Average Revenue per User (ARPU)
Average Revenue per User (ARPU) is a performance metric that illustrates the average revenue generated from each user or customer of your service or product within a specific time frame. It is calculated by dividing the total revenue made from customers or users by the total number of users within that time period. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
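The formula is simple enough to sketch in a few lines of Python; the revenue and user figures below are hypothetical:

```python
def arpu(total_revenue: float, total_users: int) -> float:
    """Average Revenue per User: total revenue divided by number of users."""
    if total_users == 0:
        raise ValueError("ARPU is undefined with zero users")
    return total_revenue / total_users

# Hypothetical month: $12,500 in revenue from 5,000 active users
monthly_arpu = arpu(12_500, 5_000)
print(monthly_arpu)  # 2.5
```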
B
Baseline
A baseline in marketing is the starting point by which you measure change or improvement in a campaign or strategy. It's a reference point that allows you to compare past performance to current performance after implementing new changes or strategies. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Bayes Theorem
Bayes theorem is a mathematical formula that describes how to update the probability of a hypothesis based on new evidence, forming the foundation of Bayesian A/B testing by combining prior beliefs with observed data to produce posterior probabilities.
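For conversion data this update has a convenient closed form: a Beta prior combined with binomial observations yields a Beta posterior. A minimal sketch in Python, with hypothetical visitor counts:

```python
def beta_posterior(prior_a: float, prior_b: float, conversions: int, trials: int):
    """Conjugate Beta-Binomial update: prior Beta(a, b) + data -> posterior Beta(a', b')."""
    return prior_a + conversions, prior_b + (trials - conversions)

def beta_mean(a: float, b: float) -> float:
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Uninformative Beta(1, 1) prior, then 30 conversions out of 1,000 visitors
a, b = beta_posterior(1, 1, 30, 1000)
print(beta_mean(a, b))  # posterior estimate of the conversion rate, 31/1002
```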
Bayesian Statistics
Bayesian Statistics is a statistical approach that treats probability as a degree of belief and continuously updates the probability of a hypothesis being true as new data is collected during an A/B test.
Below the fold
Below the fold is derived from the print newspaper terminology where the most important stories were placed "above the fold" to grab the attention of potential buyers. Similarly, in the digital space, below the fold refers to the portion of a webpage that is not immediately visible when the page loads, and the user must scroll down to see it. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Benchmarking
Benchmarking in A/B testing means comparing the current performance of a page, funnel, metric, or experiment against a reference point such as a historical baseline, industry standard, competitor pattern, or previous test result. It gives teams context for judging whether a test result is actually meaningful.
Beta
Beta is the probability of making a Type II Error in hypothesis testing, representing the risk of failing to detect a true difference between variations when one actually exists.
BigCommerce
BigCommerce is a SaaS e-commerce platform that provides enterprise-level features and flexibility for mid-market to large online retailers without transaction fees. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Bounce Rate
Bounce rate is a metric that represents the percentage of visitors who enter your website and then leave ("bounce") without viewing any other pages or taking any further action. It essentially means they have not interacted more deeply with the site. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
C
Call-to-Action (CTA)
A Call-to-Action (CTA) is a prompt on a website that tells the user to take some specified action. This can be in the form of a button, link, or image designed to encourage the user to click and continue down a conversion funnel. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
CDN
A CDN (Content Delivery Network) is a geographically distributed network of servers that cache and deliver website content from locations closest to end users, reducing latency and improving page load speeds. It stores copies of static assets like images, CSS, JavaScript, and videos across multiple data centers worldwide. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Chance to win
Chance to win in A/B testing is the estimated probability that a variant will outperform the control on the chosen primary metric. It is most common in Bayesian-style reporting, where results are expressed as probabilities rather than only p-values.
Chi-square Test
A chi-square test is a statistical method used to determine whether there is a significant association between categorical variables, most commonly applied in A/B testing to compare conversion rates or other binary outcome metrics between variations.
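For a 2×2 table of conversions versus non-conversions, the Pearson statistic can be computed directly. A minimal sketch in plain Python, with hypothetical counts:

```python
def chi_square_2x2(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Pearson chi-square statistic for a 2x2 conversion table (no continuity correction)."""
    table = [[conv_a, n_a - conv_a], [conv_b, n_b - conv_b]]
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# With 1 degree of freedom, a statistic above 3.84 is significant at alpha = 0.05
print(chi_square_2x2(200, 1000, 250, 1000))  # ≈ 7.17
```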
Click Through Rate (CTR)
The Click Through Rate (CTR) is a metric that measures the number of clicks advertisers receive on their ads per number of impressions. It is a critical measurement for understanding the efficiency and effectiveness of a specific marketing campaign or advertisement. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Cohort
A cohort is a group of users who share a common characteristic or experience within a designated time period. In marketing, cohorts are often used for analyzing behaviors and trends or making comparisons among groups. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Confidence Interval
A confidence interval is a range of values, derived from a statistical calculation, that is likely to contain an unknown population parameter. In marketing, it is often used in A/B testing to determine if the variation of a test actually improves the result.
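A common approximation for conversion-rate experiments is the normal (Wald) interval for the difference between two rates. A minimal sketch, assuming sample sizes large enough for the approximation and using hypothetical counts:

```python
import math

def conversion_diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Normal-approximation 95% confidence interval for the lift (B minus A)
    in conversion rate between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical test: 200/1000 vs 250/1000 conversions
low, high = conversion_diff_ci(200, 1000, 250, 1000)
print(round(low, 4), round(high, 4))  # interval excludes 0 -> likely a real lift
```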
Confidence level
A confidence level refers to the statistical measure in an A/B test that provides a degree of certainty about the reliability of the result. For example, a 95% confidence level means you accept only a 5% risk of concluding that a difference between two versions exists when it is actually due to random chance.
Confounding Variables
Confounding variables are external factors that influence both the independent variable (the change being tested) and the dependent variable (the metric being measured), creating a false or misleading association between them.
Control
In the context of A/B testing and marketing, a control is the original, unchanged version of a webpage, email, or other piece of marketing content that is used as a benchmark to compare against a modified version, known as the variant. The performance of the control versus the variant helps determine whether the changes lead to improved results, like higher clickthrough rates, conversions, or other goals.
Control Group
A Control Group refers to a set of users in an A/B test who are exposed to the existing or 'control' version of your website, product, or marketing campaign. This group is used to compare behavior and performance against those who experienced the new or 'test' version.
Conversion Rate
The conversion rate is the percentage of users who take a desired action on your website or in your marketing campaign. It's calculated by dividing the number of conversions by the total number of visitors. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Cookies
A Cookie is a small piece of data stored on a user's computer by the web browser while browsing a website. These cookies help websites remember information about the user's visit, like preferred language and other settings, thus providing a smoother and more personalized browsing experience. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Correlation
In marketing, correlation is a statistical measurement that describes the relationship between two variables. It is used to understand the influence of one variable on another. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Covariance
Covariance is a statistical measure that helps you understand how two different variables move together. It's used to gauge the linear relationship between these variables. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Credible Interval
A credible interval is a range of values within which a parameter (such as conversion rate or effect size) lies with a specified probability in Bayesian analysis, representing the uncertainty around an estimate after observing data. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
D
DebugBear
DebugBear is a website performance monitoring and optimization tool that provides continuous tracking of Core Web Vitals, page speed metrics, and detailed performance analysis for web pages. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Drag-and-drop Technology
Drag-and-drop Technology is a user interface feature in A/B testing and website building tools that allows users to visually move, add, or modify elements on a webpage without writing code.
E
Ecommerce Platform
An Ecommerce Platform is software that enables businesses to build, manage, and operate online stores, providing essential functionality for product display, transactions, and order management. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Effect Size
Effect size refers to the magnitude or intensity of a statistical phenomenon or experiment result. In simpler terms, it measures how big of an effect a certain factor or variable has in a study or test.
Engagement rate
Engagement rate is a metric used in digital marketing to measure the level of interaction or engagement that a piece of content receives from an audience. It includes actions like likes, shares, comments, clicks etc. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Entry Page
An Entry Page is the first page that a visitor lands on when they come to your website from an external source, such as a search engine, social media link, or another website. It acts as the first impression of your website for many visitors. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Error Rate
The error rate is the percentage of errors that occur in a certain process or action, often in reference to online activities or technical processes. In a marketing context, it might refer to the percentage of failed or incorrect actions such as unsuccessful page loads or incomplete transactions. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Exit Intent
Exit intent is a technology used in digital marketing to detect when a site visitor is about to leave the website or page. It usually triggers a pop-up or special message attempting to convince the user to stay on the page or take some action like signing up for a newsletter, purchasing a product, or downloading a resource. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Exit Page
An exit page refers to the last web page that a visitor views before they leave your website. It's where the visitor's session on your site ends. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Exit Rate
The exit rate is the percentage of visitors who leave your website from a specific page. This metric is used to identify which pages are the final destination before a visitor leaves, indicating possible issues with those pages. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Expected Loss
Expected loss is the average amount of value (revenue, conversions, or other metrics) you would lose by choosing a particular variation if it turns out to be inferior, calculated by integrating the loss function over the posterior probability distribution. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
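Under Beta posteriors, expected loss is often estimated by Monte Carlo sampling rather than exact integration. A minimal sketch, assuming uniform Beta(1, 1) priors and hypothetical conversion counts:

```python
import random

def expected_loss_of_b(conv_a, n_a, conv_b, n_b, draws=50_000, seed=42):
    """Monte Carlo estimate of the expected loss (in conversion rate) of
    shipping variant B, under independent Beta(1, 1) priors for both variants."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        p_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        total += max(p_a - p_b, 0.0)  # we only lose when B is actually worse
    return total / draws

# B looks clearly better here, so the expected loss of shipping it is tiny
print(expected_loss_of_b(200, 1000, 250, 1000))
```

A common decision rule is to stop the test once the expected loss of the leading variant drops below a pre-agreed threshold.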
Experience Optimization
Experience optimization, often abbreviated as EXO, refers to the use of various techniques, tools, and methodologies to improve the user experience during interactions with a product, system, or service. This could be an online experience, such as website navigation or mobile app use, or an offline experience such as customer service or sales interactions. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
F
False Negative
A false negative is a test result that incorrectly indicates no effect when a real one exists. In marketing terms, a false negative occurs when a test fails to identify a genuine improvement in a campaign, ad, or email.
False Positive
A false positive in marketing terms refers to a result that incorrectly indicates that a particular condition or attribute is present. For instance, in A/B testing, a false positive could occur when a test indicates that a new webpage design is significantly better at driving conversions when it is not.
Flickering
Flickering is the brief visual flash or content shift that occurs when a page initially loads with original content and then visibly changes to display an A/B test variation after the testing script executes. It creates a jarring user experience where visitors see the page transform before their eyes.
Frequentist Statistics
Frequentist Statistics is the traditional statistical approach used in A/B testing that determines whether results are significant by calculating the probability of observing the data (or more extreme data) if the null hypothesis were true.
Funnel
A funnel in marketing refers to the journey that a potential customer takes from their first interaction with your brand to the ultimate goal of conversion. It's often described as a funnel because many people will become aware of your business or product (the widest part of the funnel), but only a portion of those will move further down the funnel to consider your offering, and even fewer will proceed to the final step of making a purchase (the narrowest part of the funnel). In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
G
H
Heatmapping
Heatmapping is a data visualization tool that shows where users have clicked, scrolled, or moved their mouse on your website. It uses colors to represent different levels of activity: warm colors like red and orange signify areas where users interact the most, while cool colors like blue signify less interaction. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
HTTP Requests
HTTP Requests are individual calls made by a web browser to a server to fetch resources like HTML files, stylesheets, scripts, images, fonts, and other assets needed to render a webpage. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Hypothesis
A hypothesis in marketing terms is an assumed outcome or predicted result of a marketing campaign or strategy before it is implemented. It is a statement that forecasts the relationship between variables, such as how a change in a marketing approach (like altering a CTA button color) might affect conversions. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Hypothesis Test
A Hypothesis Test is a statistical method used in A/B testing where you test the validity of a claim or idea about a population parameter. In the context of A/B testing, it's a way to assess whether the evidence supports the assumption that a particular change (like a new webpage design or marketing strategy) will increase conversions or other key metrics.
L
Landing Page Optimization
Landing Page Optimization refers to the process of improving or enhancing each element on your landing page to increase conversions. These elements may include the headline, call-to-action, images, or copy. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Largest Contentful Paint
Largest Contentful Paint (LCP) is a Core Web Vitals metric that measures the time it takes for the largest visible content element (image, video, or text block) to render on the screen from when the page first starts loading. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Law of Large Numbers
The Law of Large Numbers is a statistical principle stating that as sample size increases, the observed average of results will converge toward the true expected value of the population. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
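A quick simulation illustrates the principle: with a true conversion rate of 5%, the observed rate settles toward 0.05 as the sample grows (the rate and sample sizes are hypothetical):

```python
import random

# Simulate visitors who convert at a true rate of 5% and watch the
# observed conversion rate converge as the sample size increases.
rng = random.Random(0)
true_rate = 0.05
observed_rates = {}
for n in (100, 10_000, 1_000_000):
    conversions = sum(rng.random() < true_rate for _ in range(n))
    observed_rates[n] = conversions / n
print(observed_rates)  # the largest sample sits closest to 0.05
```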
LCP
LCP (Largest Contentful Paint) is a Core Web Vital metric that measures how long it takes for the largest visible content element on a page to fully render from when the user first navigates to the URL. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Long-run Frequency
Long-run frequency is a frequentist interpretation of probability that defines the likelihood of an event as the proportion of times it would occur if an experiment were repeated infinitely under identical conditions. It represents the observed frequency of outcomes over many trials rather than a subjective belief.
Loss Function
A loss function quantifies the cost or negative consequence of making a wrong decision in A/B testing, typically measuring the expected loss in revenue, conversions, or other key metrics that would result from choosing an inferior variation.
M
Metrics
Metrics are measurements or data points that track and quantify various aspects of marketing performance. These can include factors like click-through rates, conversion rates, bounce rates, and more. In A/B testing, it helps teams define how an experiment is structured, measured, and interpreted before they act on the result.
Minimum Detectable Effect
The Minimum Detectable Effect (MDE) is a crucial concept in experiment design and A/B testing. It represents the smallest change in a metric that an experiment can reliably detect.
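The MDE trades off directly against sample size. A common back-of-the-envelope formula, using the two-proportion normal approximation with hypothetical rates, can be sketched as:

```python
import math

def sample_size_per_variant(baseline_rate: float, mde_abs: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect an absolute lift of
    `mde_abs` over `baseline_rate`, at alpha = 0.05 (two-sided) with 80% power."""
    variance = 2 * baseline_rate * (1 - baseline_rate)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde_abs ** 2)

# Detecting a 2-point absolute lift from a 10% baseline conversion rate
print(sample_size_per_variant(0.10, 0.02))  # roughly 3,500 visitors per variant
```

Halving the MDE roughly quadruples the required traffic, which is why teams fix the MDE before starting a test.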
Multi arm bandit
A multi-arm bandit is a statistical method used in marketing for testing multiple strategies, offers, or options concurrently to determine which one performs best. Similar to A/B testing, but instead of splitting the audience evenly among all options, a multi-arm bandit test dynamically adjusts the traffic allocation to each option based on their ongoing performance.
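The selection step of one common bandit algorithm, Thompson sampling, can be sketched as follows; in a real deployment the conversion counts would also be updated after each observed outcome (the arm counts here are hypothetical):

```python
import random

def thompson_pick(arms: dict, rng: random.Random) -> str:
    """One step of Thompson sampling: draw from each arm's Beta posterior
    (uniform Beta(1, 1) prior) and route the next visitor to the highest draw."""
    best_arm, best_draw = None, -1.0
    for name, (conversions, trials) in arms.items():
        draw = rng.betavariate(1 + conversions, 1 + trials - conversions)
        if draw > best_draw:
            best_arm, best_draw = name, draw
    return best_arm

rng = random.Random(7)
arms = {"A": [20, 1000], "B": [60, 1000]}  # B is converting about 3x better
picks = [thompson_pick(arms, rng) for _ in range(1000)]
print(picks.count("B"))  # nearly all simulated visitors are routed to B
```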
Multiple Testing
Multiple Testing is a statistical challenge that occurs when conducting multiple simultaneous hypothesis tests or comparisons, increasing the probability of finding false positive results purely by chance.
Multivariate Analysis
Multivariate Analysis is a statistical technique used to analyze data that comes from more than one variable. This process allows marketers to understand how different variables (like design, color, and location) interact to influence outcomes. In A/B testing, it helps teams define how an experiment is structured, measured, and interpreted before they act on the result.
Multivariate Testing (MVT)
Multivariate Testing (MVT) is a process where multiple variables on a webpage are simultaneously tested to determine the best performing combinations and layouts. Unlike A/B testing that tests one change at a time, MVT allows you to test numerous changes and see how they interact with each other.
N
Normalization
Normalization is a process used in data analysis to adjust the values measured on different scales to a common scale. This is often done in preparation for data comparison or statistical analysis, ensuring the results are accurate and meaningful. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Null Hypothesis
A null hypothesis is a statistical concept that assumes there is no significant difference or relation between certain aspects of a study or experiment. In other words, it's the hypothesis that your test is aiming to disprove.
Null Hypothesis Significance Testing
Null Hypothesis Significance Testing (NHST) is a statistical method used to determine whether observed differences between test variations are statistically significant or likely due to random chance. It involves testing a null hypothesis that assumes no difference exists between variations against an alternative hypothesis that a difference does exist.
O
One-Tailed Test
A One-Tailed Test is a statistical method used in hypothesis testing. It's a directional test that checks for an effect in one specific direction only, for example, whether a variant performs better than the control rather than simply differently from it.
Optimization
Optimization in marketing terms refers to the process of making changes and adjustments to various components of a marketing campaign to improve its effectiveness and efficiency. These modifications may involve aspects such as website design, ad copy, SEO strategies or other marketing tactics. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
P
P-hacking
P-hacking, also known as data dredging, is a method in which data is manipulated or selection criteria are modified until a desired statistical result, typically a statistically significant result, is achieved. It involves testing numerous hypotheses on a particular dataset until the data appears to support one.
P-value
A p-value in marketing A/B testing is a statistical measure that helps determine whether the difference in conversion rates between two versions of a page is statistically significant or just due to chance. It represents the probability of observing a difference at least as large as the one measured, assuming there is no real difference between the versions.
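For two conversion rates, the p-value is commonly computed with a two-proportion z-test. A minimal sketch using only the standard library, with hypothetical counts:

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test with a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * P(Z > |z|)

# Hypothetical test: 200/1000 vs 250/1000 conversions
print(two_proportion_p_value(200, 1000, 250, 1000))  # ≈ 0.007, below alpha = 0.05
```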
Personalization
Personalization refers to the method of tailoring the content and experience of a website or marketing message based on the individual user's specific characteristics or behaviors. These may include location, browsing history, past purchases, and other personal preferences. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Personalization Testing
Personalization testing is the process of customizing the user experience on a website or app by offering content, recommendations, or features based on an individual user's behavior, preferences, or demographics. Its purpose is to determine the most effective personalized experience that encourages a user to take a desired action, such as making a purchase, signing up for a newsletter, or completing any other conversion goal.
Population
In marketing, the population refers to the total group of people that a company or business is interested in reaching with their marketing efforts. This might be all potential customers, a specific geographic area, or a targeted demographic. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Posterior Probability
Posterior probability is the updated probability of a hypothesis being true after taking into account new evidence or data, calculated using Bayesian statistical methods by combining prior beliefs with observed experimental results.
Power of a Test
The power of a test refers to the ability of a statistical test to detect a difference when one actually exists. It measures the test's sensitivity, or its capacity to correctly identify true effects.
Prior Belief
Prior belief is the probability distribution representing your initial assumptions or existing knowledge about a parameter (such as conversion rate) before collecting new data from an experiment, serving as the starting point for Bayesian analysis.
Probability
Probability is a statistical term that measures the likelihood of an event happening. In marketing, it's used to predict outcomes such as the chance a visitor will click a link, buy a product, or engage with content. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Probability Distribution
A Probability Distribution is a mathematical function that provides the possibilities of occurrence of different possible outcomes in an experiment. In simple words, it shows the set of all possible outcomes of a certain event and how likely they are to occur.
R
Randomization
Randomization in marketing refers to the method of assigning participants in a test, such as an A/B test, to different groups without any specific pattern. It ensures that the test is fair and unbiased, and that any outcome differences between the groups can be attributed to the changes being tested, not some pre-existing factor or variable.
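In practice, assignment is often implemented as deterministic hashing rather than a literal coin flip, so a returning visitor always sees the same variant. A minimal sketch (the experiment name and user IDs are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministic random-like assignment: hash the user and experiment
    together, then bucket by modulo. The same user always gets the same
    variant within an experiment, while users spread evenly across variants."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Assignment is stable per user and roughly 50/50 across the population
print(assign_variant("user-123", "homepage-cta"))
```

Including the experiment name in the hash keeps assignments independent across experiments, so the same users are not always bucketed together.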
Randomization Bias
Randomization bias occurs when the process of randomly assigning users to test variations is flawed or compromised, resulting in systematic differences between groups that can skew test results.
Regression Analysis
Regression Analysis is a statistical method used in marketing to understand the relationship between different variables. It helps predict how a change in one variable, often called the independent variable, can affect another variable, known as the dependent variable. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Retention
Retention refers to the ability to keep or hold on to something, such as customers or users, over a certain period of time. In marketing, it's about the strategies and tactics businesses use to encourage customers to continue using their product or service, rather than switching to a competitor. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Return on Investment (ROI)
Return on Investment (ROI) is a performance measure that is used to evaluate the efficiency or profitability of an investment, or to compare the efficiency of different investments. It's calculated by dividing the profit from an investment (return) by the cost of that investment. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Revenue per Visitor (RPV)
Revenue Per Visitor (RPV) is a measure used in online business to determine the amount of money generated from each visitor to a website. It's calculated by dividing the total revenue by the total number of visitors. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
S
Sample Size
Sample size refers to the number of individual data points or subjects included in a study or experiment. In the context of A/B testing or marketing, the sample size is the total number of people or interactions (like email opens, webpage visits, or ad viewers) you measure to gather data for your test or analysis.
Script Execution Time
Script Execution Time is the duration the browser's JavaScript engine spends parsing, compiling, and running JavaScript code on a webpage. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Secondary Action
A Secondary Action is an alternative operation that a user can take on a webpage apart from the primary goal or action. This can be actions like "Save for later," "Add to wishlist," or "Share with a friend." In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Segmentation
Segmentation is the process of dividing your audience or customer base into distinct groups based on shared characteristics, such as age, location, buying habits, interests, and more. By segmenting your audience, you can create more targeted and personalized marketing campaigns that better address the needs and wants of specific groups, leading to higher engagement and conversion rates. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Learn MoreServer Latency
Server latency is the time delay between when a server receives a request and when it begins sending a response, representing the duration required for the server to process the request. It measures server-side processing efficiency independent of network transmission time. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Learn MoreServer-Side Testing
Server-Side Testing is a type of A/B testing where the test variations are rendered on the server before the webpage or app is delivered to the user's browser or device. This type of testing allows for deeper, more complex testing because it involves the back-end systems, and it's particularly useful for testing performance optimization changes such as load times or response times.
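Server-side tools typically assign each user to a variant deterministically before the page is rendered, so the same user always sees the same experience with no client-side flicker. A minimal sketch of that bucketing step (function and experiment names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user on the server by hashing
    the experiment name and user ID together."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket:
print(assign_variant("user-42", "checkout-redesign"))
```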
Learn MoreShopify
Shopify is a fully-hosted, subscription-based e-commerce platform that enables businesses to create and manage online stores without handling technical infrastructure. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Learn MoreSignificance Level
The significance level, often denoted by the Greek letter alpha (α), is the probability threshold that a test's p-value must fall below for the result to be considered statistically significant. It determines whether you reject or fail to reject the null hypothesis in hypothesis testing.
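The decision rule itself is a one-line comparison; this sketch assumes the common default of α = 0.05:

```python
def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """Reject the null hypothesis when the p-value falls below alpha."""
    return p_value < alpha

print(is_significant(0.03))  # True: 0.03 < 0.05
print(is_significant(0.20))  # False: not enough evidence to reject
```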
Learn MoreSplit URL Testing
Split URL Testing, a close relative of A/B testing, is a method used to compare two versions of a webpage that live at different URLs. In this test, the traffic to your website is divided between the original webpage (version A) and a separate page at its own URL (version B) to see which one leads to more conversions or achieves your designated goal more effectively.
Learn MoreStandard Deviation
Standard Deviation is a statistical term that measures the amount of variability or dispersion in a set of data values. In simpler terms, it shows how much the data varies from the average or mean. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
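A quick illustration with the Python standard library (the daily rates below are made-up example data):

```python
from statistics import mean, stdev

# Hypothetical daily conversion rates (%) for one variant over a week:
rates = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3]
print(f"mean = {mean(rates):.2f}, stdev = {stdev(rates):.2f}")
```

A small standard deviation relative to the mean means the metric is stable day to day, which makes a lift between variants easier to trust.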
Learn MoreStart Time To Variant
Start Time To Variant (STTV) is a performance metric that measures the elapsed time from when a page begins loading until the A/B testing variant is fully applied and visible to the user.
Learn MoreStatistical Power
Statistical Power is the probability that a test will correctly reject a false null hypothesis. In other words, it's the likelihood that if there actually is a difference (in the case of A/B testing, a difference between the two versions being tested), the test will detect it.
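For a two-proportion test, power can be approximated with a normal model; the sketch below assumes equal traffic in each variant and a two-sided test (function name is illustrative):

```python
import math
from statistics import NormalDist

def power(p1: float, p2: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-proportion z-test
    with n visitors in each variant."""
    z = NormalDist()
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(abs(p2 - p1) / se - z_alpha)

# Chance of detecting a lift from 5% to 6% with 8,000 visitors per variant:
print(f"{power(0.05, 0.06, 8000):.0%}")
```

Doubling the traffic raises the power, which is why underpowered tests so often miss real effects.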
Learn MoreStatistical Significance
Statistical Significance is the determination that an observed difference between test variations is unlikely to have occurred by chance alone, typically indicated when the p-value falls below the predetermined alpha threshold.
Learn MoreStatistically Significant
This term refers to a result that is unlikely to have occurred by chance. In marketing and A/B testing, it's used to indicate that a certain change or difference (like a higher click-through rate or more conversions) is not just a random occurrence, but is significant enough to be considered a meaningful result.
Learn MoreSTTV
STTV is the acronym for Start Time To Variant, representing the duration between page load initiation and the moment an A/B test variant becomes visible to users.
Learn MoreSubjective Probability
Subjective probability is a Bayesian interpretation of probability that represents an individual's degree of belief or confidence about an uncertain event, based on available evidence and prior knowledge. Unlike frequentist probability, it treats probability as a measure of personal certainty rather than long-run frequency. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
Learn MoreT
T-test
A t-test is a statistical hypothesis test used to determine whether there is a significant difference between the means of two groups, commonly applied in A/B testing to compare average metrics like revenue per user or time on site.
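The Welch variant of the t statistic (which does not assume equal variances) can be computed from the standard library alone; the revenue figures below are made-up example data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical revenue per user under control (a) and treatment (b):
a = [10.2, 11.5, 9.8, 10.9, 11.1, 10.4]
b = [12.0, 11.8, 12.6, 11.4, 12.9, 12.2]
print(f"t = {welch_t(a, b):.2f}")
```

The statistic is then compared against a critical value from the t-distribution (or fed to a library like SciPy for an exact p-value); a large absolute value signals a difference in means that chance alone is unlikely to produce.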
Learn MoreTest Group
A test group refers to a group of individuals in a marketing A/B or split testing who are exposed to a new version of a certain marketing element such as a webpage, email, or ad. The behaviors, interactions, and responses of this group to the new element are then tracked and compared with those of a control group (who see the unaltered or original version), to assess the effectiveness and performance of the change or variation made.
Learn MoreTesting Period
A Testing Period is a designated amount of time during which you run an experiment or test, like an A/B test, to evaluate the performance of a particular marketing campaign, webpage, or feature. The length of a testing period can vary based on the objectives of the test, the traffic your website receives, and the statistical significance you want to achieve.
Learn MoreTime to First Byte
Time to First Byte (TTFB) is the measurement of how long a browser waits to receive the first byte of data from a server after making an HTTP request. It represents the sum of redirect time, DNS lookup, server processing, and network latency before content begins downloading. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Learn MoreTimeout Setting
Timeout Setting is a configurable parameter in A/B testing tools that determines how long the system waits before defaulting to a control experience when test variations fail to load.
Learn MoreTracking Code
A tracking code is a piece of script or a unique identifier added to a URL or webpage to monitor and track user behavior on a website. This information is crucial in understanding the effectiveness of marketing efforts, studying traffic sources, user interactions, and subsequent conversions. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Learn MoreTraffic Quality
Traffic Quality refers to the relevance and engagement level of the visitors coming to your website. High quality traffic generally indicates visitors who are interested in your business, product, or service, engage with your website content, and are more likely to complete a desired action such as making a purchase or signing up for a service. In A/B testing, it helps teams connect a term, metric, or behavior to a clearer optimization decision.
Learn MoreTransaction Fees
Transaction Fees are charges levied by e-commerce platforms or payment processors as a percentage of each sale processed through an online store, separate from payment processing costs. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Learn MoreTreatment
Treatment in the context of A/B testing and marketing refers to a specific version or variation of a webpage, email, or other piece of content that is being tested against others. It's the change you want to test against the current version (often called the 'control') to see if it improves the performance or effectiveness of the page or content in question.
Learn MoreTreatment Group
Treatment Group is the set of users in an A/B test who are exposed to the new variation or experimental condition being tested, as opposed to the control group which sees the original version.
Learn MoreTwo-Tailed Test
A Two-Tailed Test is a statistical test used in A/B testing where a hypothesis is made about a parameter such as the mean. It tests for an effect in both directions, flagging a result as significant whether the test statistic is extremely high or extremely low relative to the value expected under the null hypothesis.
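For a normally distributed test statistic, the two-tailed p-value doubles the one-sided tail probability, as in this small sketch:

```python
from statistics import NormalDist

def two_tailed_p(z_score: float) -> float:
    """Two-tailed p-value: probability of a statistic at least this
    extreme in either direction, assuming a standard normal null."""
    return 2 * (1 - NormalDist().cdf(abs(z_score)))

print(f"{two_tailed_p(1.96):.3f}")  # ≈ 0.050, the classic 5% threshold
```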
Learn MoreType I Error
Type I Error is a false positive result that occurs when an A/B test incorrectly concludes there is a significant difference between variations when no true difference exists.
Learn MoreType II Error
Type II Error is a false negative result that occurs when an A/B test fails to detect a real difference between variations, incorrectly concluding there is no significant effect when one actually exists.
Learn MoreU
Uncompressed Size
Uncompressed Size is the total file size of web assets (HTML, CSS, JavaScript, images) before any compression algorithms like Gzip or Brotli are applied. In A/B testing, it helps teams protect page speed and user experience while variants, scripts, and tracking are running.
Learn MoreUsability Testing
This is a technique used to evaluate a product or website by testing it on users. It involves observing users as they attempt to complete tasks using the product, typically while they're thinking out loud.
Learn MoreUser Experience (UX)
User Experience (UX) refers to the overall experience a person has when interacting with a website, application, or digital product. It involves the design of the interface, usability, accessibility, and efficiency in achieving the user's goals. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Learn MoreUser Interface (UI)
User Interface (UI) is what people interact with when using a digital product or service, like a website, app, or software program. It includes all the screens, buttons, icons, and other visual elements that help a user to communicate with a device or application. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Learn MoreUser Segmentation
User segmentation refers to the practice of dividing your audience or customers into subgroups based on common characteristics such as demographics, buying habits, interests, engagement, etc. This practice enables businesses to tailor their marketing strategies and messages to resonate better with different audiences, thereby improving relevance and effectiveness. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Learn MoreUser Testing
User testing is a process by which real users interact with a product, software, or website while their actions and reactions are observed by the product team. It's used to gauge the usability and user-friendliness of the product, identify any areas of confusion or frustration, and gather feedback for improvements.
Learn MoreV
Variance
Variance is a statistical term that measures how much a set of data varies or deviates from the mean or average in a dataset. It's a crucial component in data analysis to understand the distribution of your data. In A/B testing, it helps teams describe uncertainty, compare variants, and decide whether an observed lift is reliable enough to act on.
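The Python standard library distinguishes population variance from sample variance (which divides by n − 1); the click counts below are made-up example data:

```python
from statistics import pvariance, variance

clicks = [12, 15, 11, 14, 13]
print(pvariance(clicks))  # population variance (divide by n)
print(variance(clicks))   # sample variance (divide by n - 1)
```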
Learn MoreVariant
Variant is any version of a webpage, feature, or element being tested in an A/B or multivariate test, including both the original control version and any modified treatment versions.
Learn MoreVariation
Variation in marketing is a version of a webpage, ad, or any other part of a marketing campaign that is slightly different from the original. During A/B testing, different variations are used to see which performs better with your audience.
Learn MoreW
Website Goals
These are specific, measurable objectives set for your website. Goals can range from increasing visitor engagement and driving more traffic to the site, to getting visitors to sign up for newsletters or complete a purchase. In A/B testing, it helps teams explain which part of the visitor experience changed and why that change could affect conversion behavior.
Learn MoreWebsite Optimization
Website optimization is the process of using controlled experimentation to improve a website's ability to drive business goals. Website owners implement A/B testing to experiment with variations on pages of their website to determine which changes will ultimately result in more conversions.
Learn MoreWooCommerce
WooCommerce is an open-source e-commerce plugin built for WordPress that transforms standard WordPress websites into fully functional online stores. In A/B testing, it helps ecommerce teams connect a page change to purchase behavior, revenue quality, and customer trust.
Learn More