
So, you’ve most likely heard about A/B testing and how useful it can be for running experiments that determine which of two (or sometimes more) variants best moves your key business metrics.

However, you may be understandably confused about what “A/A tests” are. The name suggests this testing method doesn’t introduce a new variant into the mix, so why should you use it in the first place?

Today’s article is all about explaining the why behind A/A testing and how it’s integral to accurate A/B testing, too.  

We’ll go over:

  • What is an A/A test?

  • An example of a basic A/A test

  • Why is it important to run A/A tests?

  • A/A test vs A/B test: What’s the difference?

  • A step-by-step guide to setting up an A/A test

  • How to interpret results of an A/A test

Let’s start.

What is an A/A test?

An A/A test is an experimentation technique where two identical versions of a webpage, app element, or feature (version A and version A) are tested against each other. The purpose is to verify the accuracy and reliability of your testing setup.

In an A/A test, website visitors are randomly divided into two groups, with each group seeing one of the identical versions. Since there's no difference between the pages, the conversion rates and other metrics should be nearly the same for both versions.
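To make the random split concrete, here’s a minimal sketch (not any particular tool’s implementation) of how a testing tool might deterministically bucket visitors into two identical variants by hashing a user ID; the function and experiment names are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage_aa_test") -> str:
    """Deterministically bucket a visitor into one of two identical variants.

    Hashing the user ID (salted with the experiment name) gives a stable,
    roughly uniform 50/50 split without storing any assignment state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 2  # 0 or 1, each with ~50% probability
    return "A1" if bucket == 0 else "A2"

# The same visitor always lands in the same group on every call
print(assign_variant("visitor-123"))
```

Because the assignment depends only on the user ID and the experiment name, a returning visitor always sees the same version, which keeps the two groups cleanly separated.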

Think of an A/A test as a pre-flight check for your A/B testing experiments. It makes sure your tools are ready to give you trustworthy results.

It can also be a powerful teaching tool — when A/A tests unexpectedly show “statistically significant” results, it’s often the first time teams learn about the dangers of underpowered tests, what p-values actually measure, and how to control error rates. 

Example of a basic A/A test

Let's imagine you run an e-commerce website selling outdoor gear. You've recently adopted a new A/B testing tool to help you make data-driven improvements. Before launching your first experiment, you decide to run an A/A test.

The focus: Your high-traffic homepage is a perfect candidate for an A/A test.

The setup:

  • You create an A/B test in your experimentation tool, without actually making any changes to the “B” variant. This is where the A/A test name comes from — users in each “variation” are getting the exact same experience.  

  • Your A/B testing tool splits your website traffic randomly, sending 50% of visitors to version A and the other 50% to “version B” (which is the same as version A).

The monitoring: You let the test run for a week or two, tracking the metrics you’d be interested in measuring in an actual A/B test. For example, you may measure:

  • Conversion rate: The percentage of visitors who make a purchase.

  • Click/engagement rates: The percentage of visitors who click on key elements you’re interested in measuring, e.g., a homepage banner.

  • Bounce rate: The percentage of people who leave your homepage without interacting further.
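To make those metrics concrete, here’s a minimal sketch of computing them per variant from a per-visitor log with pandas; the column names (variant, converted, clicked_banner, bounced) are hypothetical, not any specific tool’s schema:

```python
import pandas as pd

# Hypothetical per-visitor log: one row per visitor with boolean outcome flags
visits = pd.DataFrame({
    "variant": ["A1", "A2", "A1", "A2", "A1", "A2"],
    "converted": [True, False, False, True, False, False],
    "clicked_banner": [True, True, False, False, True, False],
    "bounced": [False, False, True, False, True, True],
})

# The mean of a boolean column is the rate; compare the two identical variants
summary = visits.groupby("variant")[["converted", "clicked_banner", "bounced"]].mean()
print(summary)  # conversion, click, and bounce rates for A1 vs. A2
```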

The ideal result: Since the versions are identical, you expect to see very little difference in these metrics between the two versions. Small variations are likely normal, but any major differences might signal an issue with your testing tool.

In fact, if your A/B testing tool uses frequentist statistics, you can measure “how surprised you should be” by looking at the p-value for any given difference in metrics. That is exactly what a p-value does — it tells you how often you would see results as extreme as (or more extreme than) your current observations if the null hypothesis (i.e., no difference between treatments) were true. If the p-value is very low (say, below 0.05), you should be surprised by your results and suspect that something is wrong with your setup.
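For instance, with made-up counts, a two-proportion z-test (one common frequentist choice; your tool may use a different method) produces the p-value for the observed gap in conversion rates:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/A results: conversions and visitors per identical variant
conversions = [310, 325]
visitors = [10_000, 10_000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"p-value: {p_value:.3f}")
# A large p-value (well above 0.05) means the gap is unsurprising when there
# is truly no difference, which is exactly what a healthy A/A test should show.
```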

Possible outcomes and what they mean

  • Outcome one — Similar results: Your data shows no significant difference between the versions. This is a great sign, as it means your A/B testing tool is working correctly so far.

  • Outcome two — Noticeable difference: Let's say “version B” shows a surprisingly higher conversion rate (i.e., the change is statistically significant). This raises a red flag, and you'll need to investigate. Potential causes could be a problem with your tool setup or even unusual audience patterns.

The bottom line: The A/A test acts as a safety net. It gives you confidence in your A/B testing tool (if results align), or it helps you catch potential problems early on before they skew the results of your future "real" experiments. 

If everything is working correctly, then in repeated A/A tests, any given metric should show a statistically significant change only about 5% of the time (assuming a p-value threshold of 0.05).
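You can convince yourself of that 5% figure with a quick simulation; the traffic and conversion numbers below are made up, and the z-test is just one reasonable choice of test:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
n_tests, visitors, true_rate = 2_000, 10_000, 0.03  # hypothetical values

false_positives = 0
for _ in range(n_tests):
    # Both "variants" draw from the exact same underlying conversion rate
    conv_a = rng.binomial(visitors, true_rate)
    conv_b = rng.binomial(visitors, true_rate)
    _, p = proportions_ztest([conv_a, conv_b], [visitors, visitors])
    false_positives += p < 0.05

print(f"Significant A/A tests: {false_positives / n_tests:.1%}")  # roughly 5%
```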

Why is it important to run A/A tests?

A/A tests are key to ensuring the reliability of your experiments. Here are more reasons why they're so crucial:

  • Trustworthy results: A/A tests expose issues with your testing tools or setup before you invest time and resources in running experiments. 

    Imagine launching an A/B test with a faulty tool — any "winning" variation might just be a fluke, not a genuine improvement. That could lead to costly bad decisions and wasted effort.

  • Setting benchmarks: Got a new website or feature?  An A/A test can help you set a baseline conversion rate, understand how it varies, and estimate variances for continuous metrics that can be used in future sample size calculations. Now, when you run future A/B tests, you'll have a clear point of comparison to know if your changes are truly significant.

  • Understanding sample sizes: A/A tests help you figure out how much traffic you need for reliable A/B tests. Seeing how variations behave on an identical page gives you a good idea of the sample size required to detect meaningful differences with future changes.
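As a sketch of that last point, suppose an A/A test pegged your baseline conversion rate around 3% and you want future A/B tests to detect a lift to 3.3%; a standard power analysis (here with statsmodels, as one possible tool, and with assumed alpha and power values) estimates the visitors needed per variant:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.03   # conversion rate observed during the A/A test (hypothetical)
target = 0.033    # smallest improvement you care about detecting

effect = proportion_effectsize(target, baseline)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, ratio=1.0
)
print(f"~{n_per_variant:,.0f} visitors needed per variant")
```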

How often should you run A/A tests?

A/A tests aren't meant to be a constant routine. Here's the ideal approach depending on the scenario you’re in:

  • Implementing a new testing tool: An absolute must. It's your initial quality check to ensure everything is functioning correctly.

  • Major setup changes or new experiment types: If you've significantly overhauled how you run A/B tests, running another A/A test brings peace of mind.

    Similarly, say you want to try using a browser redirect to run an A/B test (i.e., users in the B variant will be redirected to a different URL). By testing this idea with an A/A test first (redirecting users to an identical page), you’ll likely discover that the additional latency and other technical challenges make this approach unreliable. 

  • Data discrepancies: Did you notice your A/B tool's conversion numbers seem way off compared to your analytics? An A/A test can help pinpoint the problem.

A/A testing vs A/B testing

While both types of tests play a role in website improvement, they have very different purposes:

  • Goal: An A/A test validates your testing tool and setup, while an A/B test evaluates whether a change improves your key metrics.

  • Variants: An A/A test serves two identical experiences; an A/B test serves a control and at least one modified variant.

  • Expected outcome: An A/A test should show no statistically significant difference; an A/B test is looking for one.

How to set up an A/A test

Let's get practical with a step-by-step breakdown for setting up an A/A test:

Step 1: Define the user groups for testing

  • Choose a high-traffic area: The more visitors/users, the faster you'll reach reliable results. Your website or app's homepage is often a good candidate.

  • Keep it simple: For the most straightforward A/A test, focus on a single webpage or a key element/feature of your app. This will also help focus your troubleshooting if the test uncovers issues.

Step 2: Apply identical conditions to both groups

  • Ensure both variants are exact duplicates: Every detail should match, from images to button colors. Do not introduce any new changes.

  • Tool configuration: Use your A/B testing tool to set up the experiment. You'll need to designate version A as your control and “version B” as the identical variation.

  • Split the traffic: Configure the tool to randomly send 50% of visitors to version A and 50% to “version B.”

Step 3: Run the test long enough for statistically reliable results

  • Patience is key: A/A tests often need a longer run time than regular A/B tests if you want to rule out smaller potential biases. Aim for a week or more, depending on your traffic volume and other sample size calculation inputs.

  • Track your metrics: Choose the key metrics you care about most (typically tied to revenue, profit margins, or customer retention, though they may vary depending on your test’s goal). Your A/B testing tool will track those metrics for both versions.

Step 4: Collect and analyze data to ensure consistent outcomes across groups

  • Look for discrepancies: Ideally, the results between versions A and “B” should be very similar. Any major differences could signal an issue with your tool or test setup.

  • Statistical significance: Your A/B testing tool will usually calculate this for you. It's essential to check whether any differences are truly significant or simply due to random chance.

Extra tips:

  • Some A/B testing tools let you pre-determine a sample size. This can help you get a sense of how long the test might need to run.

  • Consider specific audience segments if relevant (e.g., new vs. returning customers). You might run separate A/A tests for each segment for more precise results.

How to interpret results of an A/A test

Remember, the ideal outcome of an A/A test is no significant difference between your two identical versions. Here's what to focus on when analyzing your data:

  • Checking for statistical differences: Your A/B testing tool will likely provide this calculation. A common threshold is 95% confidence — this means that we’d only expect 5 out of 100 A/A tests to show a difference this large or larger.

  • Look for trends: Even if the difference between versions isn't statistically significant, are there patterns worth noting? Perhaps one variation consistently has a slightly higher bounce rate. This might be worth investigating, even if it's not a “conclusive” result.

  • Unexpected “winners”: If your tool declares a winner despite the identical pages, don't panic. This can happen due to random chance. To be sure, double-check the following:

    • Was the test set up correctly? This may sound a little obvious, but the truth is that mistakes happen. Make sure your pages were genuinely identical, and traffic was split evenly.

    • Is your sample size sufficient? Smaller sample sizes can lead to less reliable results.

What do the results mean?

  • Inconclusive results (as expected): This suggests your testing tool is working reliably. You can move forward with confidence when running actual A/B tests.

  • Significant differences: This means it’s time to troubleshoot. Consider these possibilities:

    • Tool or setup issue: There might be a configuration error or a bug in your testing tool. For example, incorrect randomization settings could lead to an uneven traffic split or issues with how the tool tracks and records experiment data.

    • Audience differences: Perhaps the random traffic split resulted in groups with different behaviors (e.g., more mobile vs. desktop users in one group).
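One quick diagnostic for the uneven-traffic-split scenario mentioned above is a sample ratio mismatch check: compare the observed visitor counts against the 50/50 split you configured using a chi-square goodness-of-fit test. A minimal sketch, with made-up visitor counts:

```python
from scipy.stats import chisquare

# Hypothetical visitor counts per identical variant
observed = [50_600, 49_400]
expected = [sum(observed) / 2] * 2  # a 50/50 split was configured

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"SRM check p-value: {p_value:.4f}")
# A very small p-value suggests the traffic split itself is broken,
# which would invalidate any comparison of the metrics.
```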

Extra tip: Don't prematurely end an A/A test the moment you see a slightly higher conversion rate (or any other change in metrics) in one variation. This can lead to an incorrect view of your tool's accuracy.

Advanced A/A Testing

More advanced experimentation practitioners should go even further with A/A tests, running hundreds or simulating thousands of them to regularly check and ensure platform reliability. By plotting the distribution of p-values and checking to make sure it’s uniform, you can avoid disastrous pitfalls that may be caused by statistical choices or assumptions made in your setup. 
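A minimal sketch of that uniformity check: gather the p-values from many A/A tests (simulated here with made-up traffic numbers and a two-proportion z-test) and compare their distribution against the uniform distribution, for example with a Kolmogorov-Smirnov test, which is just one possible choice:

```python
import numpy as np
from scipy.stats import kstest
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(7)
visitors, rate = 20_000, 0.05  # hypothetical traffic and conversion rate

# Collect p-values from 1,000 simulated A/A tests
p_values = []
for _ in range(1_000):
    a = rng.binomial(visitors, rate)
    b = rng.binomial(visitors, rate)
    p_values.append(proportions_ztest([a, b], [visitors, visitors])[1])

# Under a healthy setup, A/A p-values should be uniform on [0, 1]
stat, ks_p = kstest(p_values, "uniform")
print(f"KS test against uniform: p = {ks_p:.3f}")
```

A badly non-uniform distribution (for example, a pile-up of very small p-values) points to a problem with the statistical choices or assumptions baked into your setup.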

For more on advanced A/A testing, we highly recommend reading Chapter 19 in “Trustworthy Online Controlled Experiments” by Ron Kohavi, Diane Tang, and Ya Xu.  

Next steps

By now, you understand the importance of A/A testing for ensuring reliable and trustworthy results when you launch new features or enhancements. Yet, setting up and executing A/A tests consistently can be challenging.

Questions likely linger:

  • Is my A/B testing tool introducing hidden biases?

  • How can I be certain the results accurately guide my decisions?

  • Can I trust these insights enough to confidently roll out new features?

This is where using a tool like Eppo becomes essential. As a data-warehouse native experimentation platform, Eppo allows you to run A/A tests with accuracy so you can then use its powerful experimentation tools to conduct your A/B tests with more confidence.

Here's how:

  • Uncompromising accuracy: Because Eppo sits directly on top of your data warehouse, it works from your internal source of truth, free of external tool limitations. No more worrying about hidden biases or unreliable results in your A/B tests.

  • Make life easier for your data teams: Eppo's user-friendly interface and SQL-based workflows allow data teams to execute A/B tests quickly and efficiently. Eppo also makes it easy for non-technical users to run experiments, but your technical users will thank you most.

  • Powerful experimentation tools: Access features like advanced diagnostics, sample size calculators, feature flagging, and detailed knowledge bases to streamline your A/B testing process.

  • Continuous learning: Eppo tracks your results over time, providing a robust foundation for understanding the health of your experiments and the effectiveness of your A/B testing strategy.

Ready to start experimenting and running your own A/A tests with complete confidence?

 Book a Demo and Explore Eppo.


