
At Eppo, we believe that experimentation platforms should meet your most trusted data where it lives.

So we are thrilled to announce that we now connect to Databricks for companies using its Lakehouse Platform. As with our Snowflake, BigQuery, and Redshift integrations, Eppo’s warehouse-native platform sits on top of your Databricks instance. All intermediate data, including tables and analyses, stays in your Databricks instance, fully controlled by you.

Users love Databricks because it acts as an abstraction layer that gives them greater flexibility over their data sources, and because it delivers the speed of a data warehouse when querying a data lake.

Databricks users can now benefit from an Eppo platform built on the following key principles:

  • Standard, reusable data-wrangling logic built on established source-of-truth tables
  • A business-approved metric repository
  • Modern statistical methodologies that minimize time-to-insight
  • Easily digestible reporting

Our Databricks connection enables Eppo to better serve data teams, and particularly machine learning teams, that run experiments on top of Databricks’ instant, elastic SQL compute.

Find more details in the docs here.

If you’re interested in learning more about how Eppo integrates with Databricks, get in touch.
