Companies that use the end-to-end Lakehouse Platform can now run experiments with Eppo.
At Eppo, we believe that experimentation platforms should meet your most trusted data where it lives.
So we are thrilled to announce that we now connect to Databricks, for companies that use its Lakehouse Platform. As with our Snowflake, BigQuery, and Redshift integrations, Eppo’s warehouse-native platform sits on top of your Databricks instance. All intermediate data, including tables and analyses, stays in your Databricks environment, fully under your control.
Users love Databricks because it acts as an abstraction layer that gives them greater flexibility in how they manage their data sources, while delivering the query speed of a data warehouse on top of a data lake.
Databricks users can now benefit from Eppo’s warehouse-native experimentation platform and the key principles it is built on.
Our Databricks connection enables Eppo to better serve data teams, and particularly machine learning teams, that run experiments on top of Databricks’ instant, elastic SQL compute.
If you’re interested in learning more about how Eppo integrates with Databricks, get in touch.