# Job

`laktory.models.resources.databricks.Job`

Bases: `JobBase`

Databricks Job
Examples:

```python
import io

from laktory import models

# Define job
job_yaml = '''
name: job-stock-prices
job_clusters:
- job_cluster_key: main
  new_cluster:
    spark_version: 16.3.x-scala2.12
    node_type_id: Standard_DS3_v2
tasks:
- task_key: ingest
  job_cluster_key: main
  notebook_task:
    notebook_path: /jobs/ingest_stock_prices.py
  libraries:
  - pypi:
      package: yfinance
- task_key: pipeline
  depends_on:
  - task_key: ingest
  pipeline_task:
    pipeline_id: 74900655-3641-49f1-8323-b8507f0e3e3b
access_controls:
- group_name: account users
  permission_level: CAN_VIEW
- group_name: role-engineers
  permission_level: CAN_MANAGE_RUN
'''
job = models.resources.databricks.Job.model_validate_yaml(io.StringIO(job_yaml))
```
```python
# Define job with for each task
job_yaml = '''
name: job-hello
tasks:
- task_key: hello-loop
  for_each_task:
    inputs: "[{'id':1, 'name': 'olivier'}, {'id':2, 'name': 'kubic'}]"
    task:
      task_key: hello-task
      notebook_task:
        notebook_path: /Workspace/Users/olivier.soucy@okube.ai/hello-world
        base_parameters:
          input: "{{input}}"
'''
job = models.resources.databricks.Job.model_validate_yaml(io.StringIO(job_yaml))
```
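In the for-each example above, `inputs` is a string encoding a list of dictionaries; the job conceptually launches one child run of the nested task per element, substituting each element for `{{input}}` in `base_parameters`. The sketch below illustrates that expansion with the standard library only; it is an illustration of the iteration semantics, not Databricks' actual runner logic, and the `hello-task_{i}` naming is hypothetical.

```python
import ast

# Parse the `inputs` string into a list of elements, as a for-each task
# conceptually does before fanning out child runs.
inputs = "[{'id':1, 'name': 'olivier'}, {'id':2, 'name': 'kubic'}]"
elements = ast.literal_eval(inputs)

# One child run per element; each run receives its element as `input`.
runs = [
    {"task_key": f"hello-task_{i}", "base_parameters": {"input": element}}
    for i, element in enumerate(elements)
]

print(len(runs))  # 2
print(runs[0]["base_parameters"]["input"]["name"])  # olivier
```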
References

| BASE | DESCRIPTION |
|---|---|
| always_running | (Bool) Whether the job is always running, like a Spark Streaming application: on every update, restart the current active run or start a new one if none is running. False by default. Any job runs are started with |
| budget_policy_id | The ID of the user-specified budget policy to use for this job. If not specified, a default budget policy may be applied when creating or modifying the job |
| continuous |  |
| control_run_state | (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the |
| dbt_task |  |
| deployment |  |
| description | Description for this job |
| edit_mode | If |
| email_notifications | An optional block specifying a set of email addresses to notify when this job begins, completes or fails. By default, no emails are sent. This block is documented below |
| environment |  |
| existing_cluster_id | Identifier of the interactive cluster to run the job on. Note: running tasks on interactive clusters may lead to increased costs! |
| format |  |
| git_source | Specifies a Git repository for task source code. See the git_source configuration block below |
| health | Block, described below, that specifies health conditions for a given task |
| job_cluster | A list of job databricks_cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster; you must declare dependent libraries in task settings. Multi-task syntax |
| library | (Set) An optional list of libraries to be installed on the cluster that will execute the job |
| max_concurrent_runs | (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to 1 |
| max_retries | (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a |
| min_retry_interval_millis | (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. By default, unsuccessful runs are immediately retried |
| name | The name of the job |
| new_cluster | Block with almost the same set of parameters as the databricks_cluster resource, except the following (check the REST API documentation for the full list of supported parameters): |
| notebook_task |  |
| notification_settings | An optional block controlling the notification settings at the job level, documented below |
| parameter | Specifies a job parameter. See the parameter configuration block |
| performance_target | The performance mode of a serverless job. The performance target determines the level of compute performance or cost-efficiency for the run. Supported values are: |
| pipeline_task |  |
| python_wheel_task |  |
| queue | The queue status for the job. See the queue configuration block below |
| retry_on_timeout | (Bool) An optional policy specifying whether to retry a job when it times out. By default, runs are not retried on timeout |
| run_as | The user or the service principal the job runs as. See the run_as configuration block below |
| run_job_task |  |
| schedule | An optional periodic schedule for this job. By default, the job only runs when triggered by clicking Run Now in the Jobs UI or by sending an API request to runNow. See the schedule configuration block below |
| spark_jar_task |  |
| spark_python_task |  |
| spark_submit_task |  |
| tags | An optional map of tags associated with the job. See the tags configuration map |
| task | Task to run against the |
| timeout_seconds | (Integer) An optional timeout applied to each run of this job. By default, there is no timeout |
| timeouts |  |
| trigger | The conditions that trigger the job to start. See the trigger configuration block below. |
| usage_policy_id |  |
| webhook_notifications | (List) An optional set of system destinations (for example, webhook destinations or Slack) to notify when runs of this job begin, complete or fail. By default, no notifications are sent. This field is a block and is documented below |
| LAKTORY | DESCRIPTION |
|---|---|
| access_controls | Access controls list |
| name_prefix | Prefix added to the job name |
| name_suffix | Suffix added to the job name |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| additional_core_resources |  |

### additional_core_resources (property)

- permissions
### laktory.models.resources.databricks.job.JobContinuous

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| pause_status | Indicates whether this trigger is paused or not. Either |
| task_retry_mode | Controls task-level retry behaviour. Allowed values are: |
### laktory.models.resources.databricks.job.JobDbtTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| catalog | The name of the catalog to use inside Unity Catalog |
| commands | (Array) Series of dbt commands to execute in sequence. Every command must start with "dbt" |
| profiles_directory | The relative path to the directory in the repository specified by |
| project_directory | The path where dbt should look for |
| schema_ | The name of the schema dbt should run in. Defaults to |
| source | The source of the project. Possible values are |
| warehouse_id | ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are currently supported |
### laktory.models.resources.databricks.job.JobDeployment

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| kind |  |
| metadata_file_path |  |
### laktory.models.resources.databricks.job.JobEmailNotifications

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| no_alert_for_skipped_runs | (Bool) Don't send alerts for skipped runs |
| on_duration_warning_threshold_exceeded | (List) List of notification IDs to call when the duration of a run exceeds the threshold specified by the |
| on_failure | (List) List of notification IDs to call when the run fails. A maximum of 3 destinations can be specified |
| on_start | (List) List of notification IDs to call when the run starts. A maximum of 3 destinations can be specified |
| on_streaming_backlog_exceeded | (List) List of notification IDs to call when any streaming backlog thresholds are exceeded for any stream |
| on_success | (List) List of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified |
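Pulled together, these fields might appear in a job definition roughly as follows; the addresses are placeholders, not values from the source.

```yaml
email_notifications:
  on_start:
  - data-team@example.com
  on_failure:
  - data-team@example.com
  - oncall@example.com
  no_alert_for_skipped_runs: true
```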
### laktory.models.resources.databricks.job.JobEnvironment

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| environment_key | A unique identifier of the Environment. It will be referenced from |
| spec | Block describing the Environment. Consists of the following attributes: |

### laktory.models.resources.databricks.job.JobEnvironmentSpec

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| base_environment |  |
| client |  |
| dependencies | (List of strings) List of pip dependencies, as supported by the version of pip in this environment. Each dependency is a pip requirement file line. See the API docs for more information |
| environment_version | Client version used by the environment. Each version comes with a specific Python version and a set of Python packages |
| java_dependencies |  |
### laktory.models.resources.databricks.job.JobGitSource

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| branch | Name of the Git branch to use. Conflicts with |
| commit | Hash of the Git commit to use. Conflicts with |
| git_snapshot |  |
| job_source |  |
| provider | Case-insensitive name of the Git provider. The following values are currently supported (subject to change; consult the Repos API documentation): |
| sparse_checkout |  |
| tag | Name of the Git tag to use. Conflicts with |
| url | URL of the Git repository to use |

### laktory.models.resources.databricks.job.JobGitSourceGitSnapshot

### laktory.models.resources.databricks.job.JobGitSourceJobSource

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| dirty_state |  |
| import_from_git_branch |  |
| job_config_path |  |

### laktory.models.resources.databricks.job.JobGitSourceSparseCheckout
### laktory.models.resources.databricks.job.JobHealth

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| rules | (List) List of rules, represented as objects with the following attributes: |

### laktory.models.resources.databricks.job.JobHealthRules

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| metric | String specifying the metric to check, like |
| op | String specifying the operation used to evaluate the given metric. The only supported operation is |
| value | Integer value used to compare to the given metric |
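As an illustration, a health block with a single rule might look like the following sketch. The `RUN_DURATION_SECONDS` metric and `GREATER_THAN` operation are the values commonly documented for the Databricks provider, not values taken from this page; consult the provider documentation for the authoritative list.

```yaml
health:
  rules:
  - metric: RUN_DURATION_SECONDS
    op: GREATER_THAN
    value: 3600
```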
### laktory.models.resources.databricks.job.JobJobCluster

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| job_cluster_key | Identifier that can be referenced in |
| new_cluster | Block with almost the same set of parameters as the databricks_cluster resource, except the following (check the REST API documentation for the full list of supported parameters): |
### laktory.models.resources.databricks.job.JobJobClusterNewCluster

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| apply_policy_default_values |  |
| autoscale |  |
| aws_attributes |  |
| azure_attributes |  |
| cluster_id |  |
| cluster_log_conf |  |
| cluster_mount_info |  |
| cluster_name |  |
| custom_tags |  |
| data_security_mode |  |
| docker_image |  |
| driver_instance_pool_id |  |
| driver_node_type_flexibility |  |
| driver_node_type_id |  |
| enable_elastic_disk |  |
| enable_local_disk_encryption |  |
| gcp_attributes |  |
| idempotency_token |  |
| init_scripts |  |
| instance_pool_id |  |
| is_single_node |  |
| kind |  |
| library | (Set) An optional list of libraries to be installed on the cluster that will execute the job |
| node_type_id |  |
| num_workers |  |
| policy_id |  |
| remote_disk_throughput |  |
| runtime_engine |  |
| single_user_name |  |
| spark_conf |  |
| spark_env_vars |  |
| spark_version |  |
| ssh_public_keys |  |
| total_initial_remote_disk_size |  |
| use_ml_runtime |  |
| worker_node_type_flexibility |  |
| workload_type | isn't supported |
### laktory.models.resources.databricks.job.JobJobClusterNewClusterAutoscale

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| max_workers |  |
| min_workers |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterAwsAttributes

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| ebs_volume_count |  |
| ebs_volume_iops |  |
| ebs_volume_size |  |
| ebs_volume_throughput |  |
| ebs_volume_type |  |
| first_on_demand |  |
| instance_profile_arn |  |
| spot_bid_price_percent |  |
| zone_id |  |
### laktory.models.resources.databricks.job.JobJobClusterNewClusterAzureAttributes

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| first_on_demand |  |
| log_analytics_info |  |
| spot_bid_max_price |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterAzureAttributesLogAnalyticsInfo

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| log_analytics_primary_key |  |
| log_analytics_workspace_id |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterClusterLogConf

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| dbfs |  |
| s3 |  |
| volumes |  |
### laktory.models.resources.databricks.job.JobJobClusterNewClusterClusterLogConfDbfs

### laktory.models.resources.databricks.job.JobJobClusterNewClusterClusterLogConfS3

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| canned_acl |  |
| destination |  |
| enable_encryption |  |
| encryption_type |  |
| endpoint |  |
| kms_key |  |
| region |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterClusterLogConfVolumes

### laktory.models.resources.databricks.job.JobJobClusterNewClusterClusterMountInfo

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| local_mount_dir_path |  |
| network_filesystem_info |  |
| remote_mount_dir_path |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| mount_options |  |
| server_address |  |
### laktory.models.resources.databricks.job.JobJobClusterNewClusterDockerImage

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| basic_auth |  |
| url | URL of the Docker image |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterDockerImageBasicAuth

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| password |  |
| username |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterDriverNodeTypeFlexibility

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| alternate_node_type_ids |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterGcpAttributes

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| boot_disk_size |  |
| first_on_demand |  |
| google_service_account |  |
| local_ssd_count |  |
| use_preemptible_executors |  |
| zone_id |  |
### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScripts

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| abfss |  |
| dbfs |  |
| file | Block consisting of single string fields: |
| gcs |  |
| s3 |  |
| volumes |  |
| workspace |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsAbfss

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsDbfs

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsFile

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsGcs

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsS3

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| canned_acl |  |
| destination |  |
| enable_encryption |  |
| encryption_type |  |
| endpoint |  |
| kms_key |  |
| region |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsVolumes

### laktory.models.resources.databricks.job.JobJobClusterNewClusterInitScriptsWorkspace
### laktory.models.resources.databricks.job.JobJobClusterNewClusterLibrary

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| cran |  |
| egg |  |
| jar |  |
| maven |  |
| pypi |  |
| requirements |  |
| whl |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterLibraryCran

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterLibraryMaven

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| coordinates |  |
| exclusions |  |
| repo |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterLibraryPypi

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterWorkerNodeTypeFlexibility

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| alternate_node_type_ids |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterWorkloadType

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| clients |  |

### laktory.models.resources.databricks.job.JobJobClusterNewClusterWorkloadTypeClients

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| jobs |  |
| notebooks |  |
### laktory.models.resources.databricks.job.JobLibrary

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| cran |  |
| egg |  |
| jar |  |
| maven |  |
| pypi |  |
| requirements |  |
| whl |  |

### laktory.models.resources.databricks.job.JobLibraryCran

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

### laktory.models.resources.databricks.job.JobLibraryMaven

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| coordinates |  |
| exclusions |  |
| repo |  |

### laktory.models.resources.databricks.job.JobLibraryPypi

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

### laktory.models.resources.databricks.job.JobLookup

Bases: `ResourceLookup`

| PARAMETER | DESCRIPTION |
|---|---|
| id | The ID of the Databricks job |
### laktory.models.resources.databricks.job.JobNewCluster

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| apply_policy_default_values |  |
| autoscale |  |
| aws_attributes |  |
| azure_attributes |  |
| cluster_id |  |
| cluster_log_conf |  |
| cluster_mount_info |  |
| cluster_name |  |
| custom_tags |  |
| data_security_mode |  |
| docker_image |  |
| driver_instance_pool_id |  |
| driver_node_type_flexibility |  |
| driver_node_type_id |  |
| enable_elastic_disk |  |
| enable_local_disk_encryption |  |
| gcp_attributes |  |
| idempotency_token |  |
| init_scripts |  |
| instance_pool_id |  |
| is_single_node |  |
| kind |  |
| library | (Set) An optional list of libraries to be installed on the cluster that will execute the job |
| node_type_id |  |
| num_workers |  |
| policy_id |  |
| remote_disk_throughput |  |
| runtime_engine |  |
| single_user_name |  |
| spark_conf |  |
| spark_env_vars |  |
| spark_version |  |
| ssh_public_keys |  |
| total_initial_remote_disk_size |  |
| use_ml_runtime |  |
| worker_node_type_flexibility |  |
| workload_type | isn't supported |
### laktory.models.resources.databricks.job.JobNewClusterAutoscale

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| max_workers |  |
| min_workers |  |

### laktory.models.resources.databricks.job.JobNewClusterAwsAttributes

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| ebs_volume_count |  |
| ebs_volume_iops |  |
| ebs_volume_size |  |
| ebs_volume_throughput |  |
| ebs_volume_type |  |
| first_on_demand |  |
| instance_profile_arn |  |
| spot_bid_price_percent |  |
| zone_id |  |

### laktory.models.resources.databricks.job.JobNewClusterAzureAttributes

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| first_on_demand |  |
| log_analytics_info |  |
| spot_bid_max_price |  |

### laktory.models.resources.databricks.job.JobNewClusterAzureAttributesLogAnalyticsInfo

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| log_analytics_primary_key |  |
| log_analytics_workspace_id |  |
### laktory.models.resources.databricks.job.JobNewClusterClusterLogConf

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| dbfs |  |
| s3 |  |
| volumes |  |

### laktory.models.resources.databricks.job.JobNewClusterClusterLogConfDbfs

### laktory.models.resources.databricks.job.JobNewClusterClusterLogConfS3

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| canned_acl |  |
| destination |  |
| enable_encryption |  |
| encryption_type |  |
| endpoint |  |
| kms_key |  |
| region |  |

### laktory.models.resources.databricks.job.JobNewClusterClusterLogConfVolumes

### laktory.models.resources.databricks.job.JobNewClusterClusterMountInfo

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| local_mount_dir_path |  |
| network_filesystem_info |  |
| remote_mount_dir_path |  |

### laktory.models.resources.databricks.job.JobNewClusterClusterMountInfoNetworkFilesystemInfo

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| mount_options |  |
| server_address |  |
### laktory.models.resources.databricks.job.JobNewClusterDockerImage

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| basic_auth |  |
| url | URL of the Docker image |

### laktory.models.resources.databricks.job.JobNewClusterDockerImageBasicAuth

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| password |  |
| username |  |

### laktory.models.resources.databricks.job.JobNewClusterDriverNodeTypeFlexibility

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| alternate_node_type_ids |  |

### laktory.models.resources.databricks.job.JobNewClusterGcpAttributes

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| boot_disk_size |  |
| first_on_demand |  |
| google_service_account |  |
| local_ssd_count |  |
| use_preemptible_executors |  |
| zone_id |  |
### laktory.models.resources.databricks.job.JobNewClusterInitScripts

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| abfss |  |
| dbfs |  |
| file | Block consisting of single string fields: |
| gcs |  |
| s3 |  |
| volumes |  |
| workspace |  |

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsAbfss

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsDbfs

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsFile

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsGcs

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsS3

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| canned_acl |  |
| destination |  |
| enable_encryption |  |
| encryption_type |  |
| endpoint |  |
| kms_key |  |
| region |  |

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsVolumes

### laktory.models.resources.databricks.job.JobNewClusterInitScriptsWorkspace
### laktory.models.resources.databricks.job.JobNewClusterLibrary

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| cran |  |
| egg |  |
| jar |  |
| maven |  |
| pypi |  |
| requirements |  |
| whl |  |

### laktory.models.resources.databricks.job.JobNewClusterLibraryCran

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

### laktory.models.resources.databricks.job.JobNewClusterLibraryMaven

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| coordinates |  |
| exclusions |  |
| repo |  |

### laktory.models.resources.databricks.job.JobNewClusterLibraryPypi

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

### laktory.models.resources.databricks.job.JobNewClusterWorkerNodeTypeFlexibility

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| alternate_node_type_ids |  |

### laktory.models.resources.databricks.job.JobNewClusterWorkloadType

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| clients |  |

### laktory.models.resources.databricks.job.JobNewClusterWorkloadTypeClients

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| jobs |  |
| notebooks |  |
### laktory.models.resources.databricks.job.JobNotebookTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| base_parameters | (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameter maps are merged. If the same key is specified in base_parameters and in run-now, the value from run-now is used. If the notebook takes a parameter that is not specified in the job's base_parameters or the run-now override parameters, the default value from the notebook is used. Retrieve these parameters in a notebook using |
| notebook_path | The path of the databricks_notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required |
| source | The source of the project. Possible values are |
| warehouse_id | ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless & Pro warehouses are currently supported |
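The base_parameters merge rule above is easy to model with plain dictionaries: run-now parameters win on key conflicts, and keys absent from both fall back to the defaults declared in the notebook itself. This is a minimal sketch of that precedence, with illustrative keys and values.

```python
# Job-level defaults declared in notebook_task.base_parameters.
base_parameters = {"env": "prod", "limit": "100"}

# Overrides supplied by a run-now call; these take precedence on conflict.
run_now_parameters = {"limit": "10"}

# The effective parameters seen by the notebook: later entries win.
effective = {**base_parameters, **run_now_parameters}
print(effective)  # {'env': 'prod', 'limit': '10'}
```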
### laktory.models.resources.databricks.job.JobNotificationSettings

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| no_alert_for_canceled_runs | (Bool) Don't send alerts for cancelled runs |
| no_alert_for_skipped_runs | (Bool) Don't send alerts for skipped runs |
### laktory.models.resources.databricks.job.JobParameter

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| default | Default value of the parameter |
| name | The name of the defined parameter. May only contain alphanumeric characters, |
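For illustration, job-level parameters might be declared as below. The names and defaults are made up, and the `parameters` key name is an assumption about the laktory model's field name (the base table above lists it as `parameter`); check the model schema for the exact key.

```yaml
parameters:
- name: env
  default: dev
- name: run_date
  default: "2024-01-01"
```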
### laktory.models.resources.databricks.job.JobPipelineTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| full_refresh | (Bool) Specifies whether to perform a full refresh of the pipeline |
| pipeline_id | The pipeline's unique ID |
### laktory.models.resources.databricks.job.JobPythonWheelTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| entry_point | Python function used as the entry point for the task |
| named_parameters | Named parameters for the task |
| package_name | Name of the Python package |
| parameters | (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
### laktory.models.resources.databricks.job.JobQueue

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| enabled | If true, enable queueing for the job |
### laktory.models.resources.databricks.job.JobRunAs

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| group_name |  |
| service_principal_name | The application ID of an active service principal. Setting this field requires the |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |
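For example, to run the job as a service principal rather than the deploying user, a run_as block might look like this; the application ID below is a placeholder, not a real principal.

```yaml
run_as:
  service_principal_name: 00000000-0000-0000-0000-000000000000
```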
### laktory.models.resources.databricks.job.JobRunJobTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| job_id | (String) ID of the job |
| job_parameters | (Map) Job parameters for the task |
### laktory.models.resources.databricks.job.JobSchedule

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| pause_status | Indicates whether this trigger is paused or not. Either |
| quartz_cron_expression | A Cron expression using Quartz syntax that describes the schedule for a job. This field is required |
| timezone_id | A Java timezone ID. The schedule for a job is resolved with respect to this timezone. See Java TimeZone for details. This field is required |
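As a sketch, a schedule block that runs the job daily at 06:00 Montreal time could look like the following. Quartz expressions use seconds-first field order, and the `UNPAUSED` value is the commonly documented counterpart of `PAUSED`; verify both against the Databricks provider documentation.

```yaml
schedule:
  quartz_cron_expression: "0 0 6 * * ?"
  timezone_id: America/Montreal
  pause_status: UNPAUSED
```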
### laktory.models.resources.databricks.job.JobSparkJarTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| jar_uri |  |
| main_class_name | The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use |
| parameters | (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters |

### laktory.models.resources.databricks.job.JobSparkPythonTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| parameters | (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
| python_file | The URI of the Python file to be executed. Cloud file URIs (e.g. |
| source | The source of the project. Possible values are |

### laktory.models.resources.databricks.job.JobSparkSubmitTask

Bases: `BaseModel`

| PARAMETER | DESCRIPTION |
|---|---|
| parameters | (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
laktory.models.resources.databricks.job.JobTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
alert_task
|
TYPE:
|
clean_rooms_notebook_task
|
TYPE:
|
compute
|
Task level compute configuration. This block is documented below
TYPE:
|
condition_task
|
TYPE:
|
dashboard_task
|
TYPE:
|
dbt_cloud_task
|
TYPE:
|
dbt_platform_task
|
TYPE:
|
dbt_task
|
TYPE:
|
depends_on
|
block specifying dependency(-ies) for a given task
TYPE:
|
description
|
description for this task
TYPE:
|
disable_auto_optimization
|
A flag to disable auto optimization in serverless tasks
TYPE:
|
disabled
|
TYPE:
|
email_notifications
|
An optional block to specify a set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This block is documented below
TYPE:
|
environment_key
|
an unique identifier of the Environment. It will be referenced from
TYPE:
|
existing_cluster_id
|
Identifier of the interactive cluster to run job on. Note: running tasks on interactive clusters may lead to increased costs!
TYPE:
|
for_each_task
|
TYPE:
|
| gen_ai_compute_task |  |
| health | block described below that specifies health conditions for a given task |
| job_cluster_key | Identifier that can be referenced in |
| library | (Set) An optional list of libraries to be installed on the cluster that will execute the job |
| max_retries | (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a |
| min_retry_interval_millis | (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried |
| new_cluster | Block with almost the same set of parameters as for the databricks_cluster resource, except the following (check the REST API documentation for the full list of supported parameters): |
| notebook_task |  |
| notification_settings | An optional block controlling the notification settings on the job level, documented below |
| pipeline_task |  |
| power_bi_task |  |
| python_wheel_task |  |
| retry_on_timeout | (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout |
| run_if | An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. One of |
| run_job_task |  |
| spark_jar_task |  |
| spark_python_task |  |
| spark_submit_task |  |
| sql_task |  |
| task_key | A unique key identifying the task within the job |
| timeout_seconds | (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout |
| webhook_notifications | (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below |
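The retry fields above (`max_retries`, `min_retry_interval_millis`, `retry_on_timeout`) interact: a timed-out run is only retried when `retry_on_timeout` is set, and each retry waits at least the configured interval. A minimal sketch of this logic, with a hypothetical `task` callable standing in for a real run:

```python
import time

def run_with_retries(task, max_retries=0, min_retry_interval_millis=0,
                     retry_on_timeout=False):
    """Sketch of how a runner might apply the retry fields documented above.

    `task` is a hypothetical callable returning "SUCCESS", "FAILED" or "TIMEDOUT".
    """
    attempt = 0
    while True:
        outcome = task()
        if outcome == "SUCCESS":
            return outcome
        if outcome == "TIMEDOUT" and not retry_on_timeout:
            # timeouts are not retried unless retry_on_timeout is set
            return outcome
        if attempt >= max_retries:
            return outcome
        attempt += 1
        # wait at least min_retry_interval_millis before the retry
        time.sleep(min_retry_interval_millis / 1000)

outcomes = iter(["FAILED", "FAILED", "SUCCESS"])
result = run_with_retries(lambda: next(outcomes), max_retries=3)
print(result)  # SUCCESS
```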
laktory.models.resources.databricks.job.JobTaskAlertTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alert_id | (String) identifier of the Databricks Alert (databricks_alert) |
| subscribers |  |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |
| workspace_path |  |

laktory.models.resources.databricks.job.JobTaskAlertTaskSubscribers

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| destination_id |  |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |

laktory.models.resources.databricks.job.JobTaskCleanRoomsNotebookTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| clean_room_name |  |
| etag |  |
| notebook_base_parameters |  |
| notebook_name |  |

laktory.models.resources.databricks.job.JobTaskCompute

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| hardware_accelerator | Hardware accelerator configuration for Serverless GPU workloads. Supported values are: |

laktory.models.resources.databricks.job.JobTaskConditionTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| left | The left operand of the condition task. It could be a string value, job state, or a parameter reference |
| op | string specifying the operation used to evaluate the given metric. The only supported operation is |
| right | The right operand of the condition task. It could be a string value, job state, or parameter reference |
laktory.models.resources.databricks.job.JobTaskDashboardTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| dashboard_id | (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard |
| filters |  |
| subscription |  |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |

laktory.models.resources.databricks.job.JobTaskDashboardTaskSubscription

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| custom_subject | string specifying a custom subject for the email sent |
| paused |  |
| subscribers |  |

laktory.models.resources.databricks.job.JobTaskDashboardTaskSubscriptionSubscribers

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| destination_id |  |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |

laktory.models.resources.databricks.job.JobTaskDbtCloudTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| connection_resource_name |  |
| dbt_cloud_job_id |  |

laktory.models.resources.databricks.job.JobTaskDbtPlatformTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| connection_resource_name |  |
| dbt_platform_job_id |  |
laktory.models.resources.databricks.job.JobTaskDbtTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| catalog | The name of the catalog to use inside Unity Catalog |
| commands | (Array) Series of dbt commands to execute in sequence. Every command must start with 'dbt' |
| profiles_directory | The relative path to the directory in the repository specified by |
| project_directory | The path where dbt should look for |
| schema_ | The name of the schema dbt should run in. Defaults to |
| source | The source of the project. Possible values are |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |
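Since every entry in `commands` must start with `dbt`, a configuration layer can reject bad values early. A minimal sketch of that check (the command list is a hypothetical example):

```python
# Hypothetical dbt task command list; each entry must begin with 'dbt'
commands = ["dbt deps", "dbt seed", "dbt run"]

def valid_dbt_commands(cmds):
    """Return True only if every command's first token is 'dbt'."""
    return all(cmd.split()[0] == "dbt" for cmd in cmds)

print(valid_dbt_commands(commands))        # True
print(valid_dbt_commands(["make build"]))  # False
```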
laktory.models.resources.databricks.job.JobTaskDependsOn

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| outcome | Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are |
| task_key | The name of the task this task depends on |
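The `depends_on` blocks define a directed acyclic graph of tasks, which determines a valid execution order. This can be sketched with the standard library's `graphlib` (task keys here are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical job: task_key -> task keys it depends on (its depends_on list)
tasks = {
    "ingest": [],
    "pipeline": ["ingest"],
    "report": ["pipeline"],
}

# static_order yields each task only after all of its dependencies
order = list(TopologicalSorter(tasks).static_order())
print(order)  # ['ingest', 'pipeline', 'report']
```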
laktory.models.resources.databricks.job.JobTaskEmailNotifications

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| no_alert_for_skipped_runs | (Bool) don't send alert for skipped runs |
| on_duration_warning_threshold_exceeded | (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the |
| on_failure | (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified |
| on_start | (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified |
| on_streaming_backlog_exceeded | (List) list of notification IDs to call when any streaming backlog thresholds are exceeded for any stream |
| on_success | (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified |
laktory.models.resources.databricks.job.JobTaskForEachTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| concurrency | Controls the number of active iteration task runs. Default is 20, maximum allowed is 100 |
| inputs | (String) Array for the task to iterate on. This can be a JSON string or a reference to an array parameter |
| task | Task to run against the inputs |
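As the job example at the top of this page shows, `inputs` is a JSON-encoded array and each iteration run receives one element through the `{{input}}` reference. The iteration can be sketched with `json.loads` (note that strict JSON requires double quotes, unlike the single-quoted string in the YAML example):

```python
import json

# Hypothetical for_each_task inputs value, as a strict-JSON string
inputs = '[{"id": 1, "name": "olivier"}, {"id": 2, "name": "kubic"}]'

items = json.loads(inputs)
for item in items:
    # each iteration run sees one element via the {{input}} reference
    print(item["name"])
```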
laktory.models.resources.databricks.job.JobTaskForEachTaskTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alert_task |  |
| clean_rooms_notebook_task |  |
| compute | Task level compute configuration. This block is documented below |
| condition_task |  |
| dashboard_task |  |
| dbt_cloud_task |  |
| dbt_platform_task |  |
| dbt_task |  |
| depends_on | block specifying dependency(-ies) for a given task |
| description | description for this task |
| disable_auto_optimization | A flag to disable auto optimization in serverless tasks |
| disabled |  |
| email_notifications | An optional block to specify a set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This block is documented below |
| environment_key | a unique identifier of the Environment. It will be referenced from |
| existing_cluster_id | Identifier of the interactive cluster to run the job on. Note: running tasks on interactive clusters may lead to increased costs! |
| gen_ai_compute_task |  |
| health | block described below that specifies health conditions for a given task |
| job_cluster_key | Identifier that can be referenced in |
| library | (Set) An optional list of libraries to be installed on the cluster that will execute the job |
| max_retries | (Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a |
| min_retry_interval_millis | (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried |
| new_cluster | Block with almost the same set of parameters as for the databricks_cluster resource, except the following (check the REST API documentation for the full list of supported parameters): |
| notebook_task |  |
| notification_settings | An optional block controlling the notification settings on the job level, documented below |
| pipeline_task |  |
| power_bi_task |  |
| python_wheel_task |  |
| retry_on_timeout | (Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout |
| run_if | An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. One of |
| run_job_task |  |
| spark_jar_task |  |
| spark_python_task |  |
| spark_submit_task |  |
| sql_task |  |
| task_key | A unique key identifying the task within the job |
| timeout_seconds | (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout |
| webhook_notifications | (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begin, complete or fail. The default behavior is to not send any notifications. This field is a block and is documented below |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskAlertTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alert_id | (String) identifier of the Databricks Alert (databricks_alert) |
| subscribers |  |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |
| workspace_path |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskAlertTaskSubscribers

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| destination_id |  |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskCleanRoomsNotebookTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| clean_room_name |  |
| etag |  |
| notebook_base_parameters |  |
| notebook_name |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskCompute

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| hardware_accelerator | Hardware accelerator configuration for Serverless GPU workloads. Supported values are: |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskConditionTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| left | The left operand of the condition task. It could be a string value, job state, or a parameter reference |
| op | string specifying the operation used to evaluate the given metric. The only supported operation is |
| right | The right operand of the condition task. It could be a string value, job state, or parameter reference |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDashboardTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| dashboard_id | (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard |
| filters |  |
| subscription |  |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDashboardTaskSubscription

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| custom_subject | string specifying a custom subject for the email sent |
| paused |  |
| subscribers |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDashboardTaskSubscriptionSubscribers

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| destination_id |  |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDbtCloudTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| connection_resource_name |  |
| dbt_cloud_job_id |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDbtPlatformTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| connection_resource_name |  |
| dbt_platform_job_id |  |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDbtTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| catalog | The name of the catalog to use inside Unity Catalog |
| commands | (Array) Series of dbt commands to execute in sequence. Every command must start with 'dbt' |
| profiles_directory | The relative path to the directory in the repository specified by |
| project_directory | The path where dbt should look for |
| schema_ | The name of the schema dbt should run in. Defaults to |
| source | The source of the project. Possible values are |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskDependsOn

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| outcome | Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are |
| task_key | The name of the task this task depends on |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskEmailNotifications

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| no_alert_for_skipped_runs | (Bool) don't send alert for skipped runs |
| on_duration_warning_threshold_exceeded | (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the |
| on_failure | (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified |
| on_start | (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified |
| on_streaming_backlog_exceeded | (List) list of notification IDs to call when any streaming backlog thresholds are exceeded for any stream |
| on_success | (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskGenAiComputeTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| command |  |
| compute | Task level compute configuration. This block is documented below |
| dl_runtime_image |  |
| mlflow_experiment_name |  |
| source | The source of the project. Possible values are |
| training_script_path |  |
| yaml_parameters |  |
| yaml_parameters_file_path |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskGenAiComputeTaskCompute

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| gpu_node_pool_id |  |
| gpu_type |  |
| num_gpus |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskHealth

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| rules | (List) list of rules that are represented as objects with the following attributes: |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskHealthRules

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| metric | string specifying the metric to check, like |
| op | string specifying the operation used to evaluate the given metric. The only supported operation is |
| value | integer value used to compare to the given metric |
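A health rule pairs a metric with a comparison operator and a threshold value. The evaluation it describes can be sketched as follows (the `GREATER_THAN` operator and the one-hour run-duration example are illustrative assumptions, not taken from this page):

```python
def rule_met(metric_value, op, threshold):
    """Evaluate one health rule; GREATER_THAN is assumed as the comparison."""
    if op == "GREATER_THAN":
        return metric_value > threshold
    raise ValueError(f"unsupported op: {op}")

# e.g. alert when a run-duration metric exceeds one hour (3600 s)
alert = rule_met(4000, "GREATER_THAN", 3600)
print(alert)  # True
```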
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskLibrary

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| cran |  |
| egg |  |
| jar |  |
| maven |  |
| pypi |  |
| requirements |  |
| whl |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskLibraryCran

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskLibraryMaven

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| coordinates |  |
| exclusions |  |
| repo |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskLibraryPypi

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewCluster

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| apply_policy_default_values |  |
| autoscale |  |
| aws_attributes |  |
| azure_attributes |  |
| cluster_id |  |
| cluster_log_conf |  |
| cluster_mount_info |  |
| cluster_name |  |
| custom_tags |  |
| data_security_mode |  |
| docker_image |  |
| driver_instance_pool_id |  |
| driver_node_type_flexibility |  |
| driver_node_type_id |  |
| enable_elastic_disk |  |
| enable_local_disk_encryption |  |
| gcp_attributes |  |
| idempotency_token |  |
| init_scripts |  |
| instance_pool_id |  |
| is_single_node |  |
| kind |  |
| library | (Set) An optional list of libraries to be installed on the cluster that will execute the job |
| node_type_id |  |
| num_workers |  |
| policy_id |  |
| remote_disk_throughput |  |
| runtime_engine |  |
| single_user_name |  |
| spark_conf |  |
| spark_env_vars |  |
| spark_version |  |
| ssh_public_keys |  |
| total_initial_remote_disk_size |  |
| use_ml_runtime |  |
| worker_node_type_flexibility |  |
| workload_type | isn't supported |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterAutoscale

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| max_workers |  |
| min_workers |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterAwsAttributes

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| ebs_volume_count |  |
| ebs_volume_iops |  |
| ebs_volume_size |  |
| ebs_volume_throughput |  |
| ebs_volume_type |  |
| first_on_demand |  |
| instance_profile_arn |  |
| spot_bid_price_percent |  |
| zone_id |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterAzureAttributes

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| first_on_demand |  |
| log_analytics_info |  |
| spot_bid_max_price |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterAzureAttributesLogAnalyticsInfo

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| log_analytics_primary_key |  |
| log_analytics_workspace_id |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterClusterLogConf

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| dbfs |  |
| s3 |  |
| volumes |  |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterClusterLogConfDbfs

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterClusterLogConfS3

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| canned_acl |  |
| destination |  |
| enable_encryption |  |
| encryption_type |  |
| endpoint |  |
| kms_key |  |
| region |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterClusterLogConfVolumes

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterClusterMountInfo

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| local_mount_dir_path |  |
| network_filesystem_info |  |
| remote_mount_dir_path |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| mount_options |  |
| server_address |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterDockerImage

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| basic_auth |  |
| url | URL of the Docker image |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterDockerImageBasicAuth

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| password |  |
| username |  |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterDriverNodeTypeFlexibility

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alternate_node_type_ids |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterGcpAttributes

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| availability |  |
| boot_disk_size |  |
| first_on_demand |  |
| google_service_account |  |
| local_ssd_count |  |
| use_preemptible_executors |  |
| zone_id |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScripts

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| abfss |  |
| dbfs |  |
| file | block consisting of single string fields: |
| gcs |  |
| s3 |  |
| volumes |  |
| workspace |  |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsAbfss

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsDbfs

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsFile

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsGcs

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsS3

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| canned_acl |  |
| destination |  |
| enable_encryption |  |
| encryption_type |  |
| endpoint |  |
| kms_key |  |
| region |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsVolumes

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterInitScriptsWorkspace

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterLibrary

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| cran |  |
| egg |  |
| jar |  |
| maven |  |
| pypi |  |
| requirements |  |
| whl |  |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterLibraryCran

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterLibraryMaven

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| coordinates |  |
| exclusions |  |
| repo |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterLibraryPypi

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| package |  |
| repo |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterWorkerNodeTypeFlexibility

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alternate_node_type_ids |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterWorkloadType

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| clients |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNewClusterWorkloadTypeClients

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| jobs |  |
| notebooks |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNotebookTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| base_parameters | (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in base_parameters and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's base_parameters or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using |
| notebook_path | The path of the databricks_notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required |
| source | The source of the project. Possible values are |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |
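The `base_parameters` merge rule above (run-now values win on key collisions) is equivalent to a dictionary merge. A minimal sketch with hypothetical parameter names:

```python
# Hypothetical notebook task base parameters
base_parameters = {"input_path": "/mnt/raw", "mode": "full"}
# Hypothetical parameters passed to a run-now call
run_now_parameters = {"mode": "incremental"}

# later keys win, so run-now overrides base_parameters on collisions
effective = {**base_parameters, **run_now_parameters}
print(effective)  # {'input_path': '/mnt/raw', 'mode': 'incremental'}
```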
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskNotificationSettings

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alert_on_last_attempt | (Bool) do not send notifications to recipients specified in |
| no_alert_for_canceled_runs | (Bool) don't send alert for cancelled runs |
| no_alert_for_skipped_runs | (Bool) don't send alert for skipped runs |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskPipelineTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| full_refresh | (Bool) Specifies if there should be a full refresh of the pipeline |
| pipeline_id | The pipeline's unique ID |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskPowerBiTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| connection_resource_name |  |
| power_bi_model |  |
| refresh_after_update |  |
| tables |  |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskPowerBiTaskPowerBiModel

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| authentication_method |  |
| model_name |  |
| overwrite_existing |  |
| storage_mode |  |
| workspace_name |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskPowerBiTaskTables

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| catalog | The name of the catalog to use inside Unity Catalog |
| name | The name of the defined parameter. May only contain alphanumeric characters, |
| schema_ | The name of the schema dbt should run in. Defaults to |
| storage_mode |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskPythonWheelTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| entry_point | Python function as entry point for the task |
| named_parameters | Named parameters for the task |
| package_name | Name of the Python package |
| parameters | (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskRunJobTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| dbt_commands |  |
| jar_params |  |
| job_id | (String) ID of the job |
| job_parameters | (Map) Job parameters for the task |
| notebook_params |  |
| pipeline_params |  |
| python_named_params |  |
| python_params |  |
| spark_submit_params |  |
| sql_params |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskRunJobTaskPipelineParams

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| full_refresh | (Bool) Specifies if there should be a full refresh of the pipeline |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSparkJarTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| jar_uri |  |
| main_class_name | The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use |
| parameters | (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
| run_as_repl |  |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSparkPythonTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| parameters | (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
| python_file | The URI of the Python file to be executed. Cloud file URIs (e.g. |
| source | The source of the project. Possible values are |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSparkSubmitTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| parameters | (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTask

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alert | block consisting of the following fields: |
| dashboard | block consisting of the following fields: |
| file | block consisting of single string fields: |
| parameters | (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters |
| query | block consisting of a single string field: |
| warehouse_id | ID of the databricks_sql_endpoint that will be used to execute the task. Only Serverless & Pro warehouses are supported right now |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTaskAlert

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| alert_id | (String) identifier of the Databricks Alert (databricks_alert) |
| pause_subscriptions | flag that specifies if subscriptions are paused or not |
| subscriptions | a list of subscription blocks consisting of one of the required fields: |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTaskAlertSubscriptions

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| destination_id |  |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTaskDashboard

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| custom_subject | string specifying a custom subject for the email sent |
| dashboard_id | (String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard |
| pause_subscriptions | flag that specifies if subscriptions are paused or not |
| subscriptions | a list of subscription blocks consisting of one of the required fields: |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTaskDashboardSubscriptions

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| destination_id |  |
| user_name | The email of an active workspace user. Non-admin users can only set this field to their own email |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTaskFile

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| path | If |
| source | The source of the project. Possible values are |
laktory.models.resources.databricks.job.JobTaskForEachTaskTaskSqlTaskQuery

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskWebhookNotifications

Bases: BaseModel

| PARAMETER | DESCRIPTION |
|---|---|
| on_duration_warning_threshold_exceeded | (List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the |
| on_failure | (List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified |
| on_start | (List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified |
| on_streaming_backlog_exceeded | (List) list of notification IDs to call when any streaming backlog thresholds are exceeded for any stream |
| on_success | (List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified |

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded

Bases: BaseModel

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskWebhookNotificationsOnFailure

Bases: BaseModel

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskWebhookNotificationsOnStart

Bases: BaseModel

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded

Bases: BaseModel

laktory.models.resources.databricks.job.JobTaskForEachTaskTaskWebhookNotificationsOnSuccess

Bases: BaseModel
laktory.models.resources.databricks.job.JobTaskGenAiComputeTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
command
|
TYPE:
|
compute
|
Task level compute configuration. This block is documented below
TYPE:
|
dl_runtime_image
|
TYPE:
|
mlflow_experiment_name
|
TYPE:
|
source
|
The source of the project. Possible values are
TYPE:
|
training_script_path
|
TYPE:
|
yaml_parameters
|
TYPE:
|
yaml_parameters_file_path
|
TYPE:
|
laktory.models.resources.databricks.job.JobTaskGenAiComputeTaskCompute
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
gpu_node_pool_id
|
TYPE:
|
gpu_type
|
TYPE:
|
num_gpus
|
TYPE:
|
laktory.models.resources.databricks.job.JobTaskHealth
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `rules` | (List) List of rules, each represented as an object with the attributes documented in `JobTaskHealthRules` below. |
laktory.models.resources.databricks.job.JobTaskHealthRules
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `metric` | String specifying the metric to check, like `RUN_DURATION_SECONDS`. |
| `op` | String specifying the operation used to evaluate the given metric. The only supported operation is `GREATER_THAN`. |
| `value` | Integer value used to compare to the given metric. |
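As a sketch of how these rules fit into a job definition (an assumed configuration, with an illustrative one-hour threshold), a health rule can be attached to a task like this:

```yaml
tasks:
  - task_key: ingest
    notebook_task:
      notebook_path: /jobs/ingest_stock_prices.py
    health:
      rules:
        - metric: RUN_DURATION_SECONDS
          op: GREATER_THAN
          value: 3600  # flag the run once it exceeds one hour
```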
laktory.models.resources.databricks.job.JobTaskLibrary
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `cran` | |
| `egg` | |
| `jar` | |
| `maven` | |
| `pypi` | |
| `requirements` | |
| `whl` | |
laktory.models.resources.databricks.job.JobTaskLibraryCran
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `package` | |
| `repo` | |
laktory.models.resources.databricks.job.JobTaskLibraryMaven
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `coordinates` | |
| `exclusions` | |
| `repo` | |
laktory.models.resources.databricks.job.JobTaskLibraryPypi
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `package` | |
| `repo` | |
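As a sketch of how the library variants combine on a task (the Maven coordinates and wheel path below are hypothetical placeholders, not real artifacts):

```yaml
tasks:
  - task_key: ingest
    notebook_task:
      notebook_path: /jobs/ingest_stock_prices.py
    libraries:
      - pypi:
          package: yfinance
      - maven:
          coordinates: com.example:my-connector_2.12:1.0.0
      - whl: /Volumes/main/libs/files/my_package-0.1.0-py3-none-any.whl
```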
laktory.models.resources.databricks.job.JobTaskNewCluster
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `apply_policy_default_values` | |
| `autoscale` | |
| `aws_attributes` | |
| `azure_attributes` | |
| `cluster_id` | |
| `cluster_log_conf` | |
| `cluster_mount_info` | |
| `cluster_name` | |
| `custom_tags` | |
| `data_security_mode` | |
| `docker_image` | |
| `driver_instance_pool_id` | |
| `driver_node_type_flexibility` | |
| `driver_node_type_id` | |
| `enable_elastic_disk` | |
| `enable_local_disk_encryption` | |
| `gcp_attributes` | |
| `idempotency_token` | |
| `init_scripts` | |
| `instance_pool_id` | |
| `is_single_node` | |
| `kind` | |
| `library` | (Set) An optional list of libraries to be installed on the cluster that will execute the job. |
| `node_type_id` | |
| `num_workers` | |
| `policy_id` | |
| `remote_disk_throughput` | |
| `runtime_engine` | |
| `single_user_name` | |
| `spark_conf` | |
| `spark_env_vars` | |
| `spark_version` | |
| `ssh_public_keys` | |
| `total_initial_remote_disk_size` | |
| `use_ml_runtime` | |
| `worker_node_type_flexibility` | |
| `workload_type` | |
laktory.models.resources.databricks.job.JobTaskNewClusterAutoscale
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `max_workers` | |
| `min_workers` | |
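A minimal sketch of an autoscaling job cluster, reusing the cluster settings from the examples above (the worker bounds are illustrative):

```yaml
job_clusters:
  - job_cluster_key: main
    new_cluster:
      spark_version: 16.3.x-scala2.12
      node_type_id: Standard_DS3_v2
      autoscale:
        min_workers: 1   # scale down to a single worker when idle
        max_workers: 4   # cap the cluster at four workers
```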
laktory.models.resources.databricks.job.JobTaskNewClusterAwsAttributes
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `availability` | |
| `ebs_volume_count` | |
| `ebs_volume_iops` | |
| `ebs_volume_size` | |
| `ebs_volume_throughput` | |
| `ebs_volume_type` | |
| `first_on_demand` | |
| `instance_profile_arn` | |
| `spot_bid_price_percent` | |
| `zone_id` | |
laktory.models.resources.databricks.job.JobTaskNewClusterAzureAttributes
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `availability` | |
| `first_on_demand` | |
| `log_analytics_info` | |
| `spot_bid_max_price` | |
laktory.models.resources.databricks.job.JobTaskNewClusterAzureAttributesLogAnalyticsInfo
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `log_analytics_primary_key` | |
| `log_analytics_workspace_id` | |
laktory.models.resources.databricks.job.JobTaskNewClusterClusterLogConf
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `dbfs` | |
| `s3` | |
| `volumes` | |
laktory.models.resources.databricks.job.JobTaskNewClusterClusterLogConfDbfs
¤
laktory.models.resources.databricks.job.JobTaskNewClusterClusterLogConfS3
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `canned_acl` | |
| `destination` | |
| `enable_encryption` | |
| `encryption_type` | |
| `endpoint` | |
| `kms_key` | |
| `region` | |
laktory.models.resources.databricks.job.JobTaskNewClusterClusterLogConfVolumes
¤
laktory.models.resources.databricks.job.JobTaskNewClusterClusterMountInfo
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `local_mount_dir_path` | |
| `network_filesystem_info` | |
| `remote_mount_dir_path` | |
laktory.models.resources.databricks.job.JobTaskNewClusterClusterMountInfoNetworkFilesystemInfo
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `mount_options` | |
| `server_address` | |
laktory.models.resources.databricks.job.JobTaskNewClusterDockerImage
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `basic_auth` | |
| `url` | URL of the Docker image. |
laktory.models.resources.databricks.job.JobTaskNewClusterDockerImageBasicAuth
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `password` | |
| `username` | |
laktory.models.resources.databricks.job.JobTaskNewClusterDriverNodeTypeFlexibility
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `alternate_node_type_ids` | |
laktory.models.resources.databricks.job.JobTaskNewClusterGcpAttributes
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `availability` | |
| `boot_disk_size` | |
| `first_on_demand` | |
| `google_service_account` | |
| `local_ssd_count` | |
| `use_preemptible_executors` | |
| `zone_id` | |
laktory.models.resources.databricks.job.JobTaskNewClusterInitScripts
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `abfss` | |
| `dbfs` | |
| `file` | Block consisting of a single string field, `destination`. |
| `gcs` | |
| `s3` | |
| `volumes` | |
| `workspace` | |
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsAbfss
¤
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsDbfs
¤
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsFile
¤
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsGcs
¤
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsS3
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `canned_acl` | |
| `destination` | |
| `enable_encryption` | |
| `encryption_type` | |
| `endpoint` | |
| `kms_key` | |
| `region` | |
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsVolumes
¤
laktory.models.resources.databricks.job.JobTaskNewClusterInitScriptsWorkspace
¤
laktory.models.resources.databricks.job.JobTaskNewClusterLibrary
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
cran
|
TYPE:
|
egg
|
TYPE:
|
jar
|
TYPE:
|
maven
|
TYPE:
|
pypi
|
TYPE:
|
requirements
|
TYPE:
|
whl
|
TYPE:
|
laktory.models.resources.databricks.job.JobTaskNewClusterLibraryCran
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
package
|
TYPE:
|
repo
|
TYPE:
|
laktory.models.resources.databricks.job.JobTaskNewClusterLibraryMaven
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
coordinates
|
TYPE:
|
exclusions
|
TYPE:
|
repo
|
TYPE:
|
laktory.models.resources.databricks.job.JobTaskNewClusterLibraryPypi
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
package
|
TYPE:
|
repo
|
TYPE:
|
laktory.models.resources.databricks.job.JobTaskNewClusterWorkerNodeTypeFlexibility
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `alternate_node_type_ids` | |
laktory.models.resources.databricks.job.JobTaskNewClusterWorkloadType
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `clients` | |
laktory.models.resources.databricks.job.JobTaskNewClusterWorkloadTypeClients
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `jobs` | |
| `notebooks` | |
laktory.models.resources.databricks.job.JobTaskNotebookTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `base_parameters` | (Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in `base_parameters` and in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's `base_parameters` or the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using `dbutils.widgets.get`. |
| `notebook_path` | The path of the databricks_notebook to be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required. |
| `source` | The source of the project. Possible values are `WORKSPACE` and `GIT`. |
| `warehouse_id` | ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now. |
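A minimal sketch of a notebook task passing base parameters (the parameter names and values are illustrative; inside the notebook they would be read back with `dbutils.widgets.get("env")` and so on):

```yaml
tasks:
  - task_key: process
    notebook_task:
      notebook_path: /jobs/process_stock_prices.py
      source: WORKSPACE
      base_parameters:
        env: dev
        run_date: "2024-01-01"
```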
laktory.models.resources.databricks.job.JobTaskNotificationSettings
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `alert_on_last_attempt` | (Bool) Do not send notifications to recipients specified in `on_start` for the retried runs and do not send notifications to recipients specified in `on_failure` until the last retry of the run. |
| `no_alert_for_canceled_runs` | (Bool) Don't send alerts for cancelled runs. |
| `no_alert_for_skipped_runs` | (Bool) Don't send alerts for skipped runs. |
laktory.models.resources.databricks.job.JobTaskPipelineTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `full_refresh` | (Bool) Specifies if there should be a full refresh of the pipeline. |
| `pipeline_id` | The pipeline's unique ID. |
laktory.models.resources.databricks.job.JobTaskPowerBiTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `connection_resource_name` | |
| `power_bi_model` | |
| `refresh_after_update` | |
| `tables` | |
| `warehouse_id` | ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now. |
laktory.models.resources.databricks.job.JobTaskPowerBiTaskPowerBiModel
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `authentication_method` | |
| `model_name` | |
| `overwrite_existing` | |
| `storage_mode` | |
| `workspace_name` | |
laktory.models.resources.databricks.job.JobTaskPowerBiTaskTables
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `catalog` | The name of the catalog to use inside Unity Catalog. |
| `name` | The name of the table to use inside Unity Catalog. |
| `schema_` | The name of the schema to use inside Unity Catalog. |
| `storage_mode` | |
laktory.models.resources.databricks.job.JobTaskPythonWheelTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `entry_point` | Python function used as the entry point for the task. |
| `named_parameters` | Named parameters for the task. |
| `package_name` | Name of the Python package. |
| `parameters` | (Map) Parameters to be used for each run of this task. |
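A sketch of a Python wheel task (the package name, entry point, and wheel path are hypothetical placeholders for illustration):

```yaml
tasks:
  - task_key: etl
    python_wheel_task:
      package_name: my_package   # assumed package name
      entry_point: main          # assumed entry-point function
      named_parameters:
        env: prod
    libraries:
      - whl: /Volumes/main/libs/files/my_package-0.1.0-py3-none-any.whl
```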
laktory.models.resources.databricks.job.JobTaskRunJobTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `dbt_commands` | |
| `jar_params` | |
| `job_id` | (String) ID of the job. |
| `job_parameters` | (Map) Job parameters for the task. |
| `notebook_params` | |
| `pipeline_params` | |
| `python_named_params` | |
| `python_params` | |
| `spark_submit_params` | |
| `sql_params` | |
laktory.models.resources.databricks.job.JobTaskRunJobTaskPipelineParams
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `full_refresh` | (Bool) Specifies if there should be a full refresh of the pipeline. |
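A sketch of a task that triggers another job, including pipeline parameters (the job ID and parameter values are placeholders):

```yaml
tasks:
  - task_key: trigger-downstream
    run_job_task:
      job_id: 123456789          # placeholder job ID
      job_parameters:
        env: prod
      pipeline_params:
        full_refresh: true       # force a full refresh of pipelines in the downstream job
```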
laktory.models.resources.databricks.job.JobTaskSparkJarTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `jar_uri` | |
| `main_class_name` | The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use `SparkContext.getOrCreate` to obtain a Spark context; otherwise, runs of the job will fail. |
| `parameters` | (Map) Parameters to be used for each run of this task. |
| `run_as_repl` | |
laktory.models.resources.databricks.job.JobTaskSparkPythonTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `parameters` | (Map) Parameters to be used for each run of this task. |
| `python_file` | The URI of the Python file to be executed. Cloud file URIs (e.g. `s3:/`, `abfss:/`, `gs:/`), DBFS paths and workspace paths are supported. |
| `source` | The source of the project. Possible values are `WORKSPACE` and `GIT`. |
laktory.models.resources.databricks.job.JobTaskSparkSubmitTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `parameters` | (Map) Parameters to be used for each run of this task. |
laktory.models.resources.databricks.job.JobTaskSqlTask
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `alert` | Block consisting of the fields documented in `JobTaskSqlTaskAlert` below. |
| `dashboard` | Block consisting of the fields documented in `JobTaskSqlTaskDashboard` below. |
| `file` | Block consisting of the string fields `path` and `source`. |
| `parameters` | (Map) Parameters to be used for each run of this task. The SQL alert task does not support custom parameters. |
| `query` | Block consisting of a single string field, `query_id`. |
| `warehouse_id` | ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless and Pro warehouses are supported right now. |
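A sketch of a SQL task running a saved query on a warehouse (both IDs below are hypothetical placeholders):

```yaml
tasks:
  - task_key: refresh-report
    sql_task:
      warehouse_id: 1234567890abcdef          # placeholder SQL warehouse ID
      query:
        query_id: 9f2e8c1d-0000-0000-0000-000000000000  # placeholder saved-query ID
```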
laktory.models.resources.databricks.job.JobTaskSqlTaskAlert
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `alert_id` | (String) Identifier of the Databricks Alert (databricks_alert). |
| `pause_subscriptions` | Flag that specifies whether subscriptions are paused. |
| `subscriptions` | A list of subscription blocks, each consisting of one of the required fields `user_name` or `destination_id`. |
laktory.models.resources.databricks.job.JobTaskSqlTaskAlertSubscriptions
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `destination_id` | |
| `user_name` | The email of an active workspace user. Non-admin users can only set this field to their own email. |
|
laktory.models.resources.databricks.job.JobTaskSqlTaskDashboard
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `custom_subject` | String specifying a custom subject for the email sent. |
| `dashboard_id` | (String) Identifier of the Databricks SQL Dashboard (databricks_sql_dashboard). |
| `pause_subscriptions` | Flag that specifies whether subscriptions are paused. |
| `subscriptions` | A list of subscription blocks, each consisting of one of the required fields `user_name` or `destination_id`. |
laktory.models.resources.databricks.job.JobTaskSqlTaskDashboardSubscriptions
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `destination_id` | |
| `user_name` | The email of an active workspace user. Non-admin users can only set this field to their own email. |
|
laktory.models.resources.databricks.job.JobTaskSqlTaskFile
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `path` | If `source` is `GIT`, the relative path of the SQL file in the remote repository; if `source` is `WORKSPACE`, the absolute path of the SQL file in the workspace. |
| `source` | The source of the project. Possible values are `WORKSPACE` and `GIT`. |
laktory.models.resources.databricks.job.JobTaskSqlTaskQuery
¤
laktory.models.resources.databricks.job.JobTaskWebhookNotifications
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `on_duration_warning_threshold_exceeded` | (List) List of notification IDs to call when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the `health` block. |
| `on_failure` | (List) List of notification IDs to call when the run fails. A maximum of 3 destinations can be specified. |
| `on_start` | (List) List of notification IDs to call when the run starts. A maximum of 3 destinations can be specified. |
| `on_streaming_backlog_exceeded` | (List) List of notification IDs to call when any streaming backlog thresholds are exceeded for any stream. |
| `on_success` | (List) List of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified. |
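A sketch of webhook notifications at the job level (the ID below is a placeholder for a notification destination configured in the workspace):

```yaml
webhook_notifications:
  on_failure:
    - id: 11111111-2222-3333-4444-555555555555  # placeholder notification destination ID
  on_duration_warning_threshold_exceeded:
    - id: 11111111-2222-3333-4444-555555555555
```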
laktory.models.resources.databricks.job.JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded
¤
Bases: BaseModel
laktory.models.resources.databricks.job.JobTaskWebhookNotificationsOnStreamingBacklogExceeded
¤
Bases: BaseModel
laktory.models.resources.databricks.job.JobTimeouts
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `create` | |
| `update_` | |
laktory.models.resources.databricks.job.JobTrigger
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `file_arrival` | Configuration block to define a trigger for File Arrival events, consisting of the attributes documented in `JobTriggerFileArrival` below. |
| `model` | |
| `pause_status` | Indicates whether this trigger is paused. Either `PAUSED` or `UNPAUSED`. |
| `periodic` | Configuration block to define a trigger for Periodic Triggers, consisting of the attributes documented in `JobTriggerPeriodic` below. |
| `table_update` | Configuration block to define a trigger for Table Updates, consisting of the attributes documented in `JobTriggerTableUpdate` below. |
laktory.models.resources.databricks.job.JobTriggerFileArrival
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `min_time_between_triggers_seconds` | If set, the trigger starts a run only after the specified amount of time has passed since the last time the trigger fired. The minimum allowed value is 60 seconds. |
| `url` | URL of the storage location to be monitored for file arrivals. |
| `wait_after_last_change_seconds` | If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds. |
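A sketch of a file-arrival trigger (the storage URL is a hypothetical external location):

```yaml
trigger:
  pause_status: UNPAUSED
  file_arrival:
    url: abfss://landing@myaccount.dfs.core.windows.net/stock_prices/  # assumed location
    min_time_between_triggers_seconds: 60
```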
laktory.models.resources.databricks.job.JobTriggerModel
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `aliases` | |
| `condition` | The table(s) condition based on which to trigger a job run. Possible values are `ANY_UPDATED` or `ALL_UPDATED`. |
| `min_time_between_triggers_seconds` | If set, the trigger starts a run only after the specified amount of time has passed since the last time the trigger fired. The minimum allowed value is 60 seconds. |
| `securable_name` | |
| `wait_after_last_change_seconds` | If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds. |
laktory.models.resources.databricks.job.JobTriggerPeriodic
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `interval` | Specifies the interval at which the job should run. |
| `unit` | The unit of time for the interval. Possible values are `HOURS`, `DAYS` and `WEEKS`. |
laktory.models.resources.databricks.job.JobTriggerTableUpdate
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `condition` | The table(s) condition based on which to trigger a job run. Possible values are `ANY_UPDATED` or `ALL_UPDATED`. |
| `min_time_between_triggers_seconds` | If set, the trigger starts a run only after the specified amount of time has passed since the last time the trigger fired. The minimum allowed value is 60 seconds. |
| `table_names` | A non-empty list of tables to monitor for changes. The table name must be in the format `catalog.schema.table`. |
| `wait_after_last_change_seconds` | If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds. |
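A sketch of a table-update trigger (the fully qualified table name is a hypothetical Unity Catalog table):

```yaml
trigger:
  pause_status: UNPAUSED
  table_update:
    table_names:
      - prod.finance.slv_stock_prices  # assumed catalog.schema.table
    condition: ANY_UPDATED
    min_time_between_triggers_seconds: 60
```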
laktory.models.resources.databricks.job.JobWebhookNotifications
¤
Bases: BaseModel
| PARAMETER | DESCRIPTION |
|---|---|
| `on_duration_warning_threshold_exceeded` | (List) List of notification IDs to call when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the `health` block. |
| `on_failure` | (List) List of notification IDs to call when the run fails. A maximum of 3 destinations can be specified. |
| `on_start` | (List) List of notification IDs to call when the run starts. A maximum of 3 destinations can be specified. |
| `on_streaming_backlog_exceeded` | (List) List of notification IDs to call when any streaming backlog thresholds are exceeded for any stream. |
| `on_success` | (List) List of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified. |
laktory.models.resources.databricks.job.JobWebhookNotificationsOnDurationWarningThresholdExceeded
¤
Bases: BaseModel
laktory.models.resources.databricks.job.JobWebhookNotificationsOnStreamingBacklogExceeded
¤
Bases: BaseModel