Marketing teams interested in running A/B tests face an uphill battle today. Their tool of choice for the last decade - the WYSIWYG "visual editor" that executes experiments via an easily implemented JavaScript snippet - is no longer a tenable solution. A confluence of changes to how websites are coded, and to how search engines rank them, is putting the day of the “visual editor” to rest with increasing speed.

Marketing teams still need to be able to experiment at the speed of ideas, and not be slowed down by reliance on scarce engineering resources. That’s why we need a new solution to help marketers self-sufficiently build experiments.

It’s time for experimentation tools to integrate directly with the CMS that manages the marketing website in the first place. The need for experimentation-specific visual editors disappears when teams can leverage more robust general-purpose tools like Webflow or Contentful - tools marketers already know well and have already integrated into their workflows.

Why don’t the visual editors work anymore?

Modern web architecture is too dynamic for the assumptions baked into visual-editor-generated code

In short: visual editors don’t work with some of the most-used development libraries on the web today. React, for example, generates dynamic (and often quite generic) class names on every re-deploy, breaking the experiment code those editors generate.

The root of this problem has always been present: the likelihood of conflicts between the production codebase for a website and the code “written” by marketing teams using a WYSIWYG tool. The latter is code written on top of the former - code meant to modify what a user would otherwise see on the page. This is all well and good, until the underlying code changes.

In the example of React’s dynamic selectors, experiments break because visual editors rely on targeting specific selectors to specify where changes should be made. Those selectors would need to stay the same from the time the experiment is coded to when it’s concluded for things to work correctly. Whenever the site is updated, however, React renames all the selectors on the site, and the code a visual editor generates becomes extremely prone to breaking… on every re-deploy.
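To make the failure mode concrete, here is a hypothetical sketch of the kind of DOM-patching code a visual editor emits (the class names and copy below are illustrative, not taken from any specific tool):

```typescript
// Illustrative only: the kind of DOM patch a visual editor generates.
// It locates the headline by a build-generated, hashed class name.
const headline = document.querySelector<HTMLHeadingElement>('.Hero_title__x7f2a');
if (headline) {
  headline.textContent = 'Try the new plan, free for 30 days';
}
// After the next deploy, the build emits a new hash (e.g. '.Hero_title__k9q1z'),
// querySelector returns null, and the variant silently stops applying --
// while the experiment keeps collecting (now-invalid) data.
```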

The general change over the last few years is that modern web frameworks - like Angular, Vue.js, or Next.js - have underlying code that changes a lot more, and a lot more often. Sometimes the resulting breakage is as innocuous as experiment changes no longer applying (causing data quality issues that may invalidate your results); other times it can wreak havoc, rendering your entire website inaccessible.

We heard repeatedly in our research that visual editors have been relegated to simple headline changes or like-for-like swapping of creative assets. One notable CRO agency even suggested that some background knowledge of coding was a prerequisite to productively using the visual editor itself, undoing the central premise of removing engineering bottlenecks.

Visual editor approaches slow down websites, damaging SEO efforts

Another reason marketing teams enjoyed visual editors in the past was the independence they offered from the rest of a company’s software development lifecycle, keeping experiment code sequestered in a standalone JavaScript snippet. But executing experiments this way requires connecting to a third-party server, then loading and executing that snippet on every page load. The site performance cost this incurs is one of the largest roadblocks to SEO performance a team can face.

On average, these JavaScript snippets take hundreds of milliseconds to download - sometimes the single largest third-party resource loaded on a given website. The Third-Party Web dataset even suggests that across 27k tracked instances, the average execution time of an Optimizely snippet is a staggering 745ms.
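If you want to quantify this cost on your own site, the browser’s standard Resource Timing API exposes per-resource fetch durations. Here is a minimal sketch (the host fragment passed in is a placeholder for whichever vendor serves your snippet):

```typescript
// Minimal sketch using the standard Resource Timing API to total the time
// this page load spent fetching resources from a given third-party host.
function thirdPartyCostMs(hostFragment: string): number {
  const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
  return entries
    .filter((entry) => entry.name.includes(hostFragment)) // entry.name is the resource URL
    .reduce((total, entry) => total + entry.duration, 0); // duration is in milliseconds
}

// 'cdn.example-ab-tool.com' is a placeholder, not a real vendor host.
console.log(`A/B snippet fetch cost: ${thirdPartyCostMs('cdn.example-ab-tool.com').toFixed(0)}ms`);
```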

Third-Party Web data on the average slowdown introduced by Optimizely implementations

Even beyond SEO, this slowdown in site performance can have a tangible negative impact on revenue.

Several firms have demonstrated such a causal link. In “Trustworthy Online Controlled Experiments” by Kohavi et al., an entire chapter is dedicated to measuring the impact of site slowdowns. Among the examples cited, a 100ms slowdown experiment at Amazon in 2006 decreased sales by 1%, and a 2012 study at Bing showed that every 100ms improvement increased revenue by 0.6%. Enough companies have replicated some version of this experiment to make it industry-standard knowledge: slow site speed negatively impacts metrics.
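As a purely illustrative back-of-envelope exercise, you can combine the two figures above - Bing’s roughly 0.6% revenue lift per 100ms and the 745ms average Optimizely execution time. The linear extrapolation is our assumption, not a claim made by the cited studies:

```typescript
// Back-of-envelope only: assumes Bing's per-100ms revenue sensitivity
// extrapolates linearly, which the cited studies do not claim.
const snippetExecutionMs = 745;    // Third-Party Web average, cited above
const revenueLiftPer100Ms = 0.006; // Bing: +0.6% revenue per 100ms saved
const estimatedShare = (snippetExecutionMs / 100) * revenueLiftPer100Ms;
console.log(`~${(estimatedShare * 100).toFixed(1)}% of revenue potentially at stake`); // ~4.5%
```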

Defining a new approach for marketing-built experiments

Marketing teams loved WYSIWYG visual editors because they made it easy to change the website without going through the engineering team or jumping through other technical hurdles. In fact, during my time as a consultant in the experimentation space, I saw plenty of teams who used their experiment visual editor for general website editing, as if it were Squarespace or Webflow. Sometimes these activities outpaced the actual experiments conducted.

There is one obvious problem with this picture: no A/B testing tool is a best-in-class solution for implementing global, long-lasting website changes.

There’s a different tool that’s already in the marketer’s arsenal for these sorts of website changes: the CMS - literally purpose-built to make website changes accessible and performant. And unlike the slate of challenges facing visual editors in A/B testing tools, changes made inside a CMS like Contentful, Builder.io, or Webflow are not brittle “code on top of code”, nor are they third-party resources that slow down site performance.

We need to get experimentation tools out of the pretend CMS business, and start using the actual tool designed for website changes.

What do CMS-built experiments look like?

Instead of using a visual editor to build changes meant for an A/B test, the goal is for teams to build those changes directly in the CMS, and then flip a switch to turn the new changes into an experiment variation.

As an example, the Eppo <> Contentful integration requires a small one-time engineering setup, after which teams can use the entry ID of any piece of content in Contentful to specify experimental variations. The result is a scalable approach to no-code experimentation, leveraging two best-in-class tools to do exactly what they do best.

Using Contentful entry IDs to quickly configure an experiment in Eppo

The workflow boils down to six simple steps that any marketer can follow (a runtime sketch follows the list):

  1. Create a new entry in Contentful for the appropriate content model.
  2. Create a new variant in Eppo, with no code, by copying the entry_id from Contentful’s UI.
  3. Specify the traffic allocation desired for the experiment.
  4. QA the new content and add screenshots to Eppo for reference.
  5. Launch the experiment in Eppo.
  6. Analyze the experiment and make rollout decisions like any other Eppo experiment.
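At runtime, the site’s rendering code only needs to ask Eppo which entry ID the visitor was assigned, then fetch that entry from Contentful. Here is a minimal sketch assuming Eppo’s JavaScript SDK and Contentful’s delivery SDK; the flag key, credentials, and entry IDs below are placeholders, and exact method signatures may differ across SDK versions, so check the current docs:

```typescript
import { init, getInstance } from '@eppo/js-client-sdk';
import { createClient } from 'contentful';

// Placeholder credentials -- substitute your own.
const contentful = createClient({ space: 'YOUR_SPACE_ID', accessToken: 'YOUR_DELIVERY_TOKEN' });

async function renderHero(visitorId: string): Promise<void> {
  await init({ apiKey: 'YOUR_EPPO_SDK_KEY' });

  // Ask Eppo which Contentful entry ID this visitor's variant points at;
  // the last argument is the fallback (control) entry ID.
  const entryId = getInstance().getStringAssignment(
    'hero-content-experiment', // flag/experiment key (placeholder)
    visitorId,                 // stable subject identifier
    {},                        // subject attributes, if any
    'CONTROL_ENTRY_ID',        // placeholder for the control entry's ID
  );

  // Fetch the assigned content from Contentful and hand it to the renderer.
  const entry = await contentful.getEntry(entryId);
  console.log('Rendering hero with fields:', entry.fields);
}
```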

What the future looks like

We built Eppo from the beginning as the only 100% data warehouse-native experimentation platform because we knew your A/B testing tool shouldn’t be a secondary form of data collection and storage, divorced from your existing source of truth. You should use best-in-class tools to warehouse your data, and layer on a best-in-class experimentation tool.

In the same way, your A/B testing tool shouldn’t be a second-class CMS either. You already have purpose-built tooling to manage your website - why add a brittle, outdated tool on top to do a poor imitation of it?

Deep integrations with CMS tools are the path forward for marketing teams to run A/B tests - enabling experimentation at scale, while being resilient to the modern tech landscape. Because the visual editors wrote code on top of code, the ability to scale the number of simultaneous experiments was always tightly restricted - how many layers of code adapting code can you pile on top before things start to break? Starting at the source of truth (the production codebase generated by the CMS) avoids this problem entirely.

At Eppo, we’re excited to pioneer this approach, and we’re continuing to build our roadmap around marketing experimentation, with further integrations and features on the way. If you’re a marketing team ready for the new way of running no-code A/B tests, reach out today.
