The Lakehouse without the Tollhouse: Announcing Native Databricks Support
The debate between “Data Warehouse” and “Data Lake” is effectively over. The answer turned out to be “Both,” or more specifically, the Lakehouse.
Databricks and Delta Lake have revolutionized how we treat big data—bringing ACID transactions and strict schemas to the flexibility of the data lake.
But most ELT tools still treat Databricks like a dumb file bucket. They dump CSVs or Parquet files into S3/ADLS and hope for the best, leaving you to manage the cleanup.
Today, Saddle Data becomes fully Lakehouse-native.
We are launching our Databricks Destination, joining Snowflake as our second Enterprise-Ready target.
How We Built It
We didn’t just build a file uploader. We integrated the native Databricks SQL Driver to interact with your Delta tables intelligently.
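For the curious, here is a minimal sketch of what that interaction looks like. We're assuming the Python flavor of the driver (the `databricks-sql-connector` package) purely for illustration; the hostname, HTTP path, and token below are placeholders, not real values.

```python
from databricks import sql  # pip install databricks-sql-connector

# Placeholder credentials -- swap in your workspace's values.
with sql.connect(
    server_hostname="dbc-a1b2c3d4-e5f6.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/abc123def456",
    access_token="dapi-...",
) as connection:
    with connection.cursor() as cursor:
        # Statements issued here run on the SQL warehouse itself,
        # so Delta writes go through real transactions rather than
        # raw files dropped into object storage.
        cursor.execute("SELECT current_catalog(), current_schema()")
        print(cursor.fetchone())
```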
- ACID Compliance: We respect the transactional nature of Delta. When Saddle reports a sync as “Success,” your data is committed and visible. No dirty reads.
- Native MERGE (Upsert): Just like our Snowflake connector, we support incremental upserts. We don’t just append duplicate rows or force you to do a full refresh. We use the Delta `MERGE` command to surgically update changed records (sketched below).
- Automatic Table Creation: We automatically create Delta Lake tables optimized for your data types. If the destination table doesn’t exist, we create it for you using `CREATE TABLE ... USING DELTA`.
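To make those last two bullets concrete, here is roughly the shape of the SQL an incremental sync reduces to, issued through a cursor like the one above. This is an illustrative sketch, not our generated SQL verbatim: the table names, key column, and schema are all hypothetical.

```python
def upsert_delta(cursor, target: str, staging: str, key: str) -> None:
    """Sketch of one sync step against Delta.
    All identifiers here are hypothetical examples."""
    # Automatic table creation: a real Delta table, typed up front.
    cursor.execute(f"""
        CREATE TABLE IF NOT EXISTS {target} (
            {key}      BIGINT,
            status     STRING,
            updated_at TIMESTAMP
        ) USING DELTA
    """)
    # Incremental upsert: one atomic MERGE updates changed rows and
    # inserts new ones. Until it commits, readers see the old data.
    cursor.execute(f"""
        MERGE INTO {target} AS t
        USING {staging} AS s
          ON t.{key} = s.{key}
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)

# e.g. upsert_delta(cursor, "analytics.raw.orders",
#                   "analytics.raw._orders_staging", "order_id")
```

The `UPDATE SET *` / `INSERT *` shorthand assumes the staging table's schema matches the target's, which holds when the same sync defines both.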
The “Enterprise-Ready” Promise
With the addition of Databricks (alongside our existing Snowflake support), Saddle Data is no longer just for “startups.” We now cover the two platforms that dominate enterprise data.
The difference? We don’t charge you a “Volume Tax” for using them.
Whether you are Team Snowflake or Team Databricks, you get the same flat-rate, developer-friendly pipeline.