# Databricks
Databricks as a connector for federated SQL query against Databricks using Spark Connect, or directly from Delta Lake tables.
## `from`

The `from` field for the Databricks connector takes the form `databricks:catalog.schema.table`, where `catalog.schema.table` is the fully-qualified path to the table to read from.
## `name`

The dataset name. This will be used as the table name within Spice.

Example:
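A minimal `spicepod.yaml` dataset sketch; the catalog, schema, table, endpoint, and cluster ID below are placeholders:

```yaml
datasets:
  - from: databricks:my_catalog.my_schema.my_table # placeholder table path
    name: my_table
    params:
      mode: spark_connect
      databricks_endpoint: dbc-1234567890ab-12.cloud.databricks.com # placeholder endpoint
      databricks_cluster_id: 1234-567890-abcdefgh # placeholder cluster ID
```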
## `params`
| Parameter | Description |
| --- | --- |
| `mode` | The execution mode for querying against Databricks. The default is `spark_connect`. Possible values: `spark_connect` (use Spark Connect to query against Databricks; requires a Spark cluster to be available) or `delta_lake` (query directly from Delta Tables; requires the object store credentials to be provided). |
| `databricks_endpoint` | The endpoint of the Databricks instance. Required for both modes. |
| `databricks_cluster_id` | The ID of the compute cluster in Databricks to use for the query. Only valid when `mode` is `spark_connect`. |
| `databricks_use_ssl` | If true, use a TLS connection to connect to the Databricks endpoint. Default is `true`. |
| `client_timeout` | Optional. Applicable only in `delta_lake` mode. Specifies the timeout for object store operations. Default is `30s`. E.g. `client_timeout: 60s`. |
| `databricks_aws_region` | Optional. The AWS region for the S3 object store. E.g. `us-west-2`. |
| `databricks_aws_access_key_id` | The access key ID for the S3 object store. |
| `databricks_aws_secret_access_key` | The secret access key for the S3 object store. |
| `databricks_aws_endpoint` | Optional. The endpoint for the S3 object store. E.g. `s3.us-west-2.amazonaws.com`. |
| `databricks_azure_storage_account_name` | The Azure Storage account name. |
| `databricks_azure_storage_account_key` | The Azure Storage key for accessing the storage account. |
| `databricks_azure_storage_client_id` | The Service Principal client ID for accessing the storage account. |
| `databricks_azure_storage_client_secret` | The Service Principal client secret for accessing the storage account. |
| `databricks_azure_storage_sas_key` | The shared access signature key for accessing the storage account. |
| `databricks_azure_storage_endpoint` | Optional. The endpoint for the Azure Blob storage account. |
| `google_service_account` | Filesystem path to the Google service account JSON key file. |
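As a sketch, a `delta_lake` dataset backed by S3 might look like the following; the table path, endpoint, region, and secret names are placeholders:

```yaml
datasets:
  - from: databricks:my_catalog.my_schema.my_table # placeholder table path
    name: my_table
    params:
      mode: delta_lake
      databricks_endpoint: dbc-1234567890ab-12.cloud.databricks.com # placeholder endpoint
      databricks_aws_region: us-west-2
      databricks_aws_access_key_id: ${secrets:aws_access_key_id}
      databricks_aws_secret_access_key: ${secrets:aws_secret_access_key}
      client_timeout: 60s
```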
The table below shows the Databricks (`mode: delta_lake`) data types supported, along with the type mapping to Apache Arrow types in Spice.
| Databricks type | Arrow type |
| --- | --- |
| `STRING` | `Utf8` |
| `BIGINT` | `Int64` |
| `INT` | `Int32` |
| `SMALLINT` | `Int16` |
| `TINYINT` | `Int8` |
| `FLOAT` | `Float32` |
| `DOUBLE` | `Float64` |
| `BOOLEAN` | `Boolean` |
| `BINARY` | `Binary` |
| `DATE` | `Date32` |
| `TIMESTAMP` | `Timestamp(Microsecond, Some("UTC"))` |
| `TIMESTAMP_NTZ` | `Timestamp(Microsecond, None)` |
| `DECIMAL` | `Decimal128` |
| `ARRAY` | `List` |
| `STRUCT` | `Struct` |
| `MAP` | `Map` |
The Databricks connector (`mode: delta_lake`) does not support reading Delta tables with the `V2Checkpoint` feature enabled. To use the Databricks connector (`mode: delta_lake`) with such tables, drop the `V2Checkpoint` feature by executing the following command (the table name below is a placeholder):
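```sql
-- Drop the V2Checkpoint table feature so delta_lake mode can read the table.
-- Replace my_catalog.my_schema.my_table with your table's fully-qualified name.
ALTER TABLE my_catalog.my_schema.my_table DROP FEATURE v2Checkpoint;
```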
## Memory Considerations

When using the Databricks (`mode: delta_lake`) Data Connector without acceleration, data is loaded into memory during query execution. Ensure sufficient memory is available, including overhead for queries and the runtime, especially with concurrent queries.
The Databricks Connector (`mode: spark_connect`) does not yet support streaming query results from Spark.
Use the secret replacement syntax to reference a secret, e.g. `${secrets:my_token}`.
Configure the connection to the object store when using `mode: delta_lake`. Use the secret replacement syntax to reference a secret, e.g. `${secrets:aws_access_key_id}`.
For more details on dropping Delta table features, refer to the official Delta Lake documentation.
When using `mode: spark_connect`, correlated scalar subqueries can only be used in filters, aggregations, projections, and UPDATE/MERGE/DELETE commands.
Memory limitations can be mitigated by storing acceleration data on disk, which is supported by the DuckDB and SQLite accelerators by specifying `mode: file`, as sketched below.
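A sketch of a file-backed acceleration using the `duckdb` engine; the table path and endpoint are placeholders:

```yaml
datasets:
  - from: databricks:my_catalog.my_schema.my_table # placeholder table path
    name: my_table
    params:
      mode: delta_lake
      databricks_endpoint: dbc-1234567890ab-12.cloud.databricks.com # placeholder endpoint
    acceleration:
      enabled: true
      engine: duckdb
      mode: file # persist accelerated data to disk instead of memory
```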