
dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

The Databricks Lakehouse provides one simple platform to unify all your data, analytics and AI workloads.

dbt-databricks

The dbt-databricks adapter contains all of the code enabling dbt to work with Databricks. This adapter is based on the amazing work done in dbt-spark. Some key features include:

  • Easy setup. No need to install an ODBC driver as the adapter uses pure Python APIs.
  • Open by default. For example, it uses the open and performant Delta table format by default. This has many benefits, including letting you use MERGE as the default incremental materialization strategy (see the sketch after this list).
  • Support for Unity Catalog. dbt-databricks>=1.1.1 supports the 3-level namespace of Unity Catalog (catalog / schema / relations) so you can organize and secure your data the way you like.
  • Performance. The adapter generates SQL expressions that are automatically accelerated by the native, vectorized Photon execution engine.
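
As a rough sketch of what the merge strategy looks like in practice, the incremental Python model below is merged into its existing Delta table on a key column each run. The model and column names (raw_events, id, event_ts) are hypothetical and only illustrate the pattern:

def model(dbt, session):
    # Incremental model: on incremental runs, new rows are MERGEd into
    # the existing Delta table using `id` as the key.
    dbt.config(
        materialized="incremental",
        incremental_strategy="merge",
        unique_key="id",
    )
    events = dbt.ref("raw_events")  # hypothetical upstream model
    if dbt.is_incremental:
        # Only process rows newer than what the table already contains.
        max_ts = session.sql(f"select max(event_ts) from {dbt.this}").collect()[0][0]
        events = events.filter(events.event_ts > max_ts)
    return events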

Choosing between dbt-databricks and dbt-spark

If you are developing a dbt project on Databricks, we recommend using dbt-databricks for the reasons noted above.

dbt-spark is an actively developed adapter that works with Databricks as well as with Apache Spark wherever it is hosted, e.g. on AWS EMR.

Getting started

Installation

Install using pip:

pip install dbt-databricks

Upgrade to the latest version:

pip install --upgrade dbt-databricks

Profile Setup

your_profile_name:
  target: dev
  outputs:
    dev:
      type: databricks
      catalog: [optional catalog name, if you are using Unity Catalog, only available in dbt-databricks>=1.1.1]
      schema: [database/schema name]
      host: [your.databrickshost.com]
      http_path: [/sql/your/http/path]
      token: [dapiXXXXXXXXXXXXXXXXXXXXXXX]
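
With a profile in place, a quick way to confirm dbt can reach your workspace is to materialize a trivial model. The sketch below is a hypothetical Python model (e.g. models/hello_databricks.py) and assumes your http_path points at an all-purpose cluster that can run Python models; on a SQL warehouse, a one-line SQL model is the simpler smoke test:

def model(dbt, session):
    # Materialize a tiny table just to confirm the connection works.
    dbt.config(materialized="table")
    return session.createDataFrame(
        [(1, "hello"), (2, "databricks")],
        ["id", "message"],
    )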

Quick Starts

The following quick starts will get you up and running with the dbt-databricks adapter:

Compatibility

The dbt-databricks adapter has been tested:

  • with Python 3.7 or above.
  • against Databricks SQL and Databricks runtime releases 9.1 LTS and later.

Tips and Tricks

Choosing compute for a Python model

You can override the compute used for a specific Python model by setting the http_path property in model configuration. This can be useful if, for example, you want to run a Python model on an All Purpose cluster, while running SQL models on a SQL Warehouse. Note that this capability is only available for Python models.

def model(dbt, session):
    # Run this model on the compute identified by http_path instead of
    # the default compute from your profile.
    dbt.config(
      http_path="sql/protocolv1/..."
    )
    return dbt.ref("my_upstream_model")  # hypothetical upstream model
